How to deploy an OpenStack instance with Ansible in a specific project

I've been trying to deploy an instance in OpenStack to a project other than my user's default project. The only way to do this appears to be by passing the project_name within the auth: setting. This works fine, but it is not really compatible with using a clouds.yaml config via the cloud: setting, or even with using the admin-openrc.sh file that OpenStack provides. (The admin-openrc.sh appears to take precedence over any settings in auth:.)
I'm using the current openstack.cloud collection 1.3.0 (https://docs.ansible.com/ansible/latest/collections/openstack/cloud/index.html). Some of the modules have the option to specify a project:, like the network module, but the server module does not.
So this deploys in a named project:
- name: Create instances
  server:
    state: present
    auth:
      auth_url: "{{ auth_url }}"
      username: "{{ username }}"
      password: "{{ password }}"
      project_name: "{{ project }}"
      project_domain_name: "{{ domain_name }}"
      user_domain_name: "{{ domain_name }}"
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"
Having sourced the admin-openrc.sh, this deploys only to your default project (OS_PROJECT_NAME=<project_name>):
- name: Create instances
  server:
    state: present
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"
When I unset OS_PROJECT_NAME but keep all the other values from admin-openrc.sh, I can do this, but it requires working with a non-default setting (unsetting that one environment variable):
- name: Create instances
  server:
    state: present
    auth:
      project_name: "{{ project }}"
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"
I'm looking for the most useful way to use a specific authorization model (be it clouds.yaml or environment variables) for all my OpenStack modules, while still being able to deploy to a specific project.

(You should upgrade to the latest collection (1.5.3), though perhaps this also works with 1.3.0.)
You can use the cloud property of the "server" task (openstack.cloud.server). Here is how you can proceed:
All project definitions are stored in clouds.yml (here is part of its content):
clouds:
  tarantula:
    auth:
      auth_url: https://auth.cloud.myprovider.net/v3/
      project_name: tarantula
      project_id: the-id-of-my-project
      user_domain_name: Default
      project_domain_name: Default
      username: my-username
      password: my-password
    regions:
      - US
      - EU1
From a task, you can refer to the appropriate cloud like this:
- name: Create instances
  server:
    state: present
    cloud: tarantula
    region_name: EU1
    name: "test-instance-1"

We now refrain from using any environment variables and make sure the project id is set in the configuration or in the group_vars. This works because it does not depend on a local clouds.yml file. We basically build an auth object in Ansible that can be used throughout the deployment.
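A minimal sketch of that approach (the variable name openstack_auth and the group_vars file name are illustrative, not from the original post): define the auth dict once, for example in group_vars/all.yml, and pass it to every OpenStack task.

# group_vars/all.yml -- hypothetical variable name
openstack_auth:
  auth_url: "{{ auth_url }}"
  username: "{{ username }}"
  password: "{{ password }}"
  project_name: "{{ project }}"
  project_domain_name: "{{ domain_name }}"
  user_domain_name: "{{ domain_name }}"

# any OpenStack task can then reuse the same object
- name: Create instances
  openstack.cloud.server:
    state: present
    auth: "{{ openstack_auth }}"
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    flavor: "{{ flavor }}"
    network: "{{ network }}"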

Related

Ansible Playbook with include_tasks

I have an Ansible playbook called main.yml that builds a Windows VM using an OVF template. It then configures the VM using variables that I pass. Typical Ansible stuff.
What I would like to do is have another task inside the main.yml run at the end that configures a couple of things on the VM (configureserver.yml). I pass "{{ vmName }}" as a variable in the main.yml, and want the 2nd playbook (configureserver.yml) to run against that "{{ vmName }}" variable.
Main.yml (removed some of the customization to shorten this post)
- name: Deploy Virtual Machine from an OVF template in content library
  community.vmware.vmware_content_deploy_ovf_template:
    name: "{{ vmName }}"
    hostname: "{{ vcenterName }}"
    username: "{{ vmwareUser }}"
    password: "{{ vmwarePassword }}"
    template: "{{ templateName }}"

- name: Configure the VM
  community.vmware.vmware_guest:
    name: "{{ vmName }}"
    hostname: "{{ vcenterName }}"
    username: "{{ vmwareUser }}"
    password: "{{ vmwarePassword }}"
    datacenter: "{{ datacenterName }}"
    cluster: "{{ clusterName }}"
    datastore: "{{ datastoreclusterName }}"

# THIS IS WHAT I WANT TO ADD
- name: Configure the D Drive
  include_tasks:
    file: configureserver.yml
configureserver.yml
- hosts: "{{ vmName }}"
  gather_facts: no
  become: yes
  vars_files:
    - passwords.yml
    - vars.test.yml
  vars:
    - ansible_connection: ssh
    - ansible_shell_type: cmd
    - ansible_become_method: runas
    - ansible_become_user: System
In the code above, I want to run a separate task in a separate yml that will configure the server. I want to run the configureserver.yml playbook against the "{{ vmName }}" variable as the inventory or host.
Glad to provide more details if needed.
I figured it out. I changed the include_tasks to import_playbook and that did the trick.
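For reference, a minimal sketch of what that fix can look like (assuming main.yml keeps its own play and configureserver.yml keeps its hosts: "{{ vmName }}" header, with vmName supplied as an extra var); import_playbook has to sit at the playbook level, not inside a tasks: list:

# main.yml -- hedged sketch, not the poster's full playbook
- hosts: localhost
  gather_facts: no
  tasks:
    # ... the vmware_content_deploy_ovf_template and vmware_guest tasks above ...
    - debug:
        msg: "VM {{ vmName }} deployed, handing off to configureserver.yml"

# Runs the second playbook (with its own hosts: "{{ vmName }}" play) afterwards
- name: Configure the D Drive
  import_playbook: configureserver.yml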

Setting Ansible vars with set_fact results

I'm running Ansible 2.9.18 on RHEL 7.
I am using hvac to retrieve usernames and passwords from a HashiCorp Vault.
vars:
  - creds: "{{ lookup('hashi_vault', 'secret=tst2/data/cisco token= url=http://10.80.23.81:8200') }}"

tasks:
  - name: set Cisco creds
    set_fact:
      cisco: "{{ creds['data'] }}"

  - name: Get nxos facts
    nxos_command:
      username: "{{ cisco['username'] }}"
      password: "{{ cisco['password'] }}"
      commands: show ver
      timeout: 30
    register: ver_out

  - debug: msg="{{ ver_out.stdout }}"
But username and password are deprecated, and I am trying to figure out how to pass the username and password as a "provider" variable. This code doesn't work:
vars:
  asa_api:
    - creds: "{{ lookup('hashi_vault', 'secret=tst2/data/cisco token= url=http://10.80.23.81:8200') }}"
  set_fact:
    cisco: "{{ creds['data'] }}"
  username: "{{ cisco['username'] }}"
  password: "{{ cisco['password'] }}"

tasks:
  - name: show run
    asa_command:
      commands: show run
      provider: "{{ asa_api }}"
    register: run
    become: yes
    tags:
      - show_run
I cannot figure out the syntax to make this work. I would greatly appreciate any help.
Thanks,
Steve
Disclaimer: This is a generic answer. I do not have any network device to test this fully, so you might have to adapt a bit after reading the documentation.
You are taking this the wrong way. You don't need set_fact at all, and both methods you are trying to use (user/pass or provider dict) are actually deprecated. Ansible treats your network device like any host and will use the available user and password you have configured if they exist.
In the following example, I'm assuming your playbook only targets network devices and that the login/pass stored in your vault is the same on all devices.
- name: Untested network device connection configuration demo
  hosts: my_network_device_group
  vars:
    # This indicates which connection plugin to use. Default is ssh.
    # Another possible value is httpapi. See above documentation link.
    ansible_connection: network_cli
    vault_secret: tst2/data/cisco
    vault_token: verysecret
    vault_url: http://10.80.23.81:8200
    vault_options: "secret={{ vault_secret }} token={{ vault_token }} url={{ vault_url }}"
    creds: "{{ lookup('hashi_vault', vault_options).data }}"
    # These are the user and pass used for connection.
    ansible_user: "{{ creds.username }}"
    ansible_password: "{{ creds.password }}"
  tasks:
    - name: Get nxos version
      nxos_command:
        commands: show ver
        timeout: 30
      register: ver_cmd

    - name: Show version
      debug:
        msg: "NXOS version on {{ inventory_hostname }} is {{ ver_cmd.stdout }}"

    - name: Another task to play on targets
      debug:
        msg: "Task played on {{ inventory_hostname }}"
Rather than vars at play level, you can store this information in your inventory for all hosts or for a specific group, even for each host. See how to organise your group and host variables if you want to use that feature.
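As an illustrative sketch of that inventory-based layout (the file name matches the example group above; vault_secret, vault_token and vault_url would still need to be defined somewhere, e.g. in group_vars/all or an Ansible vault), the same connection variables could live in a group_vars file instead of the play:

# group_vars/my_network_device_group.yml -- hypothetical file, mirroring the play vars above
ansible_connection: network_cli
vault_options: "secret={{ vault_secret }} token={{ vault_token }} url={{ vault_url }}"
creds: "{{ lookup('hashi_vault', vault_options).data }}"
ansible_user: "{{ creds.username }}"
ansible_password: "{{ creds.password }}"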

Is there a REST API or Ansible module to register a VM in vCenter

I know we can do this in PowerCLI, but I would like to know if there is any other method to register a VM in vCenter, such as using the vCenter API or Ansible?
For migrating a machine from vCenter A to vCenter B, my vRO does not have access to vCenter B, but I need to register the machine in vCenter B. It would be easy if I could use a REST API or an Ansible module to do it, or is there a way I can use VMware Converter with vRO?
Recently a module was added to Ansible that can do just that: https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/vmware/vmware_guest_register_operation.py
Some example code:
- name: Register VM to inventory
  vmware_guest_register_operation:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    datacenter: "{{ datacenter }}"
    folder: "/vm"
    esxi_hostname: "{{ esxi_hostname }}"
    name: "{{ vm_name }}"
    template: no
    path: "[datastore1] vm/vm.vmx"
    state: present

Ansible for Openshift Deployment

I want to deploy pods on OpenShift using an Ansible playbook.
For this, I have written the following play:
- name: Create Deployment Config for the usecase
  with_dict: "{{ apps }}"
  openshift_v1_deployment_config:
    name: "{{ item.key }}"
    namespace: "{{ usecaseId }}"
    labels:
      app: "{{ item.key }}"
      service: "{{ item.key }}"
    replicas: 1
    selector:
      app: "{{ item.key }}"
      service: "{{ item.key }}"
    spec_template_metadata_labels:
      app: "{{ item.key }}"
      service: "{{ item.key }}"
    containers:
      - env:
        image: "{{ openshift_registry_svc_url }}/{{ usecaseId }}/{{ item.key }}"
        name: "{{ item.key }}"
        ports:
          - container_port: 8080
            protocol: TCP
Does anyone have an idea how I can get the IP address of the deployed pod using Ansible itself? TIA
Shagun, I do not think you would be able to get the IP address of the pods outside the cluster, as the IPs are managed by the OpenShift SDN. The ways the outside world can connect to pods are routes, port forwarding, or manually assigning an external IP to a service.
Hope this URL helps with the methods to connect to pods: https://docs.openshift.com/container-platform/3.5/dev_guide/expose_service/index.html
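As a hedged illustration of the routes option mentioned above (the service name my-service is a placeholder, not from the question), a Route pointing at an existing Service can be created with the k8s module:

- name: Expose an existing service through an OpenShift route
  k8s:
    state: present
    definition:
      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: my-service
        namespace: "{{ usecaseId }}"
      spec:
        to:
          kind: Service
          name: my-service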
Finally I found it.
This can be done using the following Kubernetes module provided by Ansible: k8s
e.g.:
- name: Fetch all pods which are running
  set_fact:
    deployments_pod: "{{ lookup('k8s', kind='Pod', namespace=test) }}"
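A hedged follow-up sketch for the original question: assuming the lookup returns a list of Pod resources, each pod's IP should be available under status.podIP, so it could be read back like this:

- name: Show the IP address of each deployed pod
  debug:
    msg: "{{ item.metadata.name }} has IP {{ item.status.podIP }}"
  loop: "{{ deployments_pod }}"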

Ansible - How to loop and get attributes of new EC2 instances

I'm creating a set of new EC2 instances using this play
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Provision a set of instances
      ec2:
        assign_public_ip: yes
        aws_access_key: XXXXXXXXXX
        aws_secret_key: XXXXXXXXXX
        group_id: XXXXXXXXXX
        instance_type: t2.micro
        image: ami-32a85152
        vpc_subnet_id: XXXXXXXXXX
        region: XXXXXXXXXX
        user_data: "{{ lookup('file', '/SOME_PATH/cloud-config.yaml') }}"
        wait: true
        exact_count: 1
        count_tag:
          Name: Demo
        instance_tags:
          Name: Demo
      register: ec2

    - name: Add new CoreOS machines to coreos-launched group
      add_host: hostname="{{ item.public_ip }}" groups=coreos-launched
      with_items: "{{ ec2.instances }}"

    - name: Wait for SSH to come up
      wait_for: host="{{ item.public_dns_name }}" port=22 delay=60 timeout=320
      with_items: "{{ ec2.instances }}"
Now, I need to create an SSL/TLS certificate for each of those new machines. To do so, I require their private IP. Yet, I don't know how to access the "{{ ec2.instances }}" I registered in the previous play.
In the same playbook, I tried to do something like this:
- hosts: coreos-launched
  gather_facts: False
  tasks:
    - name: Find the current machine IP address
      command: echo "{{ item.private_ip }}" > /tmp/private_ip
      with_items: "{{ ec2.instances }}"
      sudo: yes
But without any success. Is there a way to use the "{{ ec2.instances }}" items inside the same playbook but in a different play?
-- EDIT --
Following Theo's advice, I managed to get the instance attributes using
- name: Gather current facts
  action: ec2_facts
  register: ec2_facts

- name: Use the current facts
  command: echo "{{ ec2_facts.ansible_facts.ansible_ec2_local_ipv4 }}"
  with_items: "{{ ec2_facts }}"
The best way to learn about a return structure (short of the documentation) is to wrap it in a debug task.
- name: Debug ec2 variable
  debug: var=ec2.instances
From there, follow the structure to get the variable you seek.
Also (following the idempotence model), you can use the ec2_remote_facts module to get the facts from the instances and call them in future plays/tasks as well.
See Variables -- Ansible Documentation for more info about calling registered variables.
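A common pattern for exactly this situation (a sketch, not part of the original answer) is to stash the attribute on each host when it is added with add_host, so the later play can read it back as an ordinary host variable:

- name: Add new CoreOS machines to coreos-launched group
  add_host:
    hostname: "{{ item.public_ip }}"
    groups: coreos-launched
    private_ip: "{{ item.private_ip }}"
  with_items: "{{ ec2.instances }}"

- hosts: coreos-launched
  gather_facts: False
  tasks:
    - name: Use the private IP carried over from the first play
      debug:
        msg: "Private IP is {{ private_ip }}"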
