How to add a startup script when deploying a Windows instance in GCP

I'm deploying a Windows instance in GCP with a couple of startup scripts using the metadata option in the gcp_compute_instance module. The instance is getting created as expected, but the startup scripts are not getting executed. Kindly refer to the task below and suggest what changes I need to make to execute the startup script (first creating a local admin user with a password, and then setting WinRM basic authentication to true).
- name: create an instance
  gcp_compute_instance:
    state: present
    name: "{{ vm_name }}"
    machine_type: "{{ machine_type }}"
    metadata:
      startup-script: |
        New-LocalUser -AccountNeverExpires:$true -Password (ConvertTo-SecureString -AsPlainText -Force 'Password123!') -Name 'adminuser1' | Add-LocalGroupMember -Group administrators
        winrm set winrm/config/service '@{AllowUnencrypted="true"}'
        winrm set winrm/config/service/auth '@{Basic="true"}'
    disks:
      - auto_delete: true
        boot: true
        source: "{{ disk }}"
      - auto_delete: true
        boot: false
        interface: NVME
        type: SCRATCH
        initialize_params:
          disk_type: local-ssd
      - auto_delete: true
        boot: false
        interface: NVME
        type: SCRATCH
        initialize_params:
          disk_type: local-ssd
    network_interfaces:
      - network: "{{ network }}"
    zone: "{{ zone }}"
    project: "{{ gcp_project }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
    scopes:
      - https://www.googleapis.com/auth/compute
  register: instance

The fine manual says the key should be, in your case, windows-startup-script-ps1, since that script is PowerShell.
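In the task above, that would mean changing only the metadata key; a minimal sketch (the script body is taken unchanged from the question):

    metadata:
      # windows-startup-script-ps1 tells the GCP Windows agent to run the value as PowerShell
      windows-startup-script-ps1: |
        New-LocalUser -AccountNeverExpires:$true -Password (ConvertTo-SecureString -AsPlainText -Force 'Password123!') -Name 'adminuser1' | Add-LocalGroupMember -Group administrators
        winrm set winrm/config/service '@{AllowUnencrypted="true"}'
        winrm set winrm/config/service/auth '@{Basic="true"}'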


Multi VM Create using ansible playbook from Templates

Problem Statement:
While trying to create a VM from a template, especially in the case of a Windows OS template (it works fine for a Linux OS template), the VM ends up "powered off" after being created through the Ansible playbook.
Here is my playbook to create multiple VMs from a template; please help me make this playbook more efficient. Thanks in advance.
---
- name: Create a multiple VMs asynchronously
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Clone the template
      vmware_guest:
        hostname: xxx.xxx.xx
        username: username
        password: password
        validate_certs: False
        name: "{{ item }}"
        template: Template-Win-2016
        cluster: cluster
        datacenter: Dev and QA DC
        folder: /
        state: poweredon
        hardware:
          memory_mb: 12288
          num_cpus: 4
        disk:
          - size_gb: 60
            type: thin
            autoselect_datastore: True
        wait_for_ip_address: yes
      with_items:
        - testvm001
        - testvm002
      register: vm_create
      async: 600
      poll: 0
    - name: Waiting for status
      async_status:
        jid: "{{ item.ansible_job_id }}"
      register: job_result
      until: job_result.finished
      delay: 60
      retries: 60
      with_items: "{{ vm_create.results }}"

How to deploy an openstack instance with ansible in a specific project

I've been trying to deploy an instance in OpenStack to a different project than my user's default project. The only way to do this appears to be by passing the project_name within the auth: setting. This works fine, but it is not really compatible with using a clouds.yaml config with the clouds: setting, or even with using the admin-openrc.sh file that OpenStack provides. (The admin-openrc.sh appears to take precedence over any settings in auth:.)
I'm using the current openstack.cloud collection 1.3.0 (https://docs.ansible.com/ansible/latest/collections/openstack/cloud/index.html). Some of the modules have the option to specify a project:, like the network module, but the server module does not.
So this deploys in a named project:
- name: Create instances
  server:
    state: present
    auth:
      auth_url: "{{ auth_url }}"
      username: "{{ username }}"
      password: "{{ password }}"
      project_name: "{{ project }}"
      project_domain_name: "{{ domain_name }}"
      user_domain_name: "{{ domain_name }}"
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"
When admin-openrc.sh has been sourced (OS_PROJECT_NAME=<project_name>), this deploys only to your default project:
- name: Create instances
  server:
    state: present
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"
When I unset OS_PROJECT_NAME but set all other values from admin-openrc.sh, I can do this, but it requires working with a non-default setting (unsetting that one environment variable):
- name: Create instances
  server:
    state: present
    auth:
      project_name: "{{ project }}"
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"
I'm looking for the most useful way to use a specific authorization model (be it clouds.yaml or environment variables) for all my OpenStack modules, while still being able to deploy to a specific project.
(You should upgrade to the latest collection (1.5.3), though this may also work with 1.3.0.)
You can use the cloud property of the "server" task (openstack.cloud.server). Here is how you can proceed:
All project definitions are stored in clouds.yml (here is part of its content):
clouds:
  tarantula:
    auth:
      auth_url: https://auth.cloud.myprovider.net/v3/
      project_name: tarantula
      project_id: the-id-of-my-project
      user_domain_name: Default
      project_domain_name: Default
      username: my-username
      password: my-password
    regions:
      - US
      - EU1
From a task you can refer to the appropriate cloud like this:
- name: Create instances
  server:
    state: present
    cloud: tarantula
    region_name: EU1
    name: "test-instance-1"
We now refrain from using any environment variables and make sure the project ID is set in the configuration or in the group_vars. This works because it does not depend on a local clouds.yml file. We basically build an auth object in Ansible that can be used throughout the deployment, as sketched below.
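As a rough sketch of that approach (the variable names below, such as os_auth, are illustrative and not from the original post), the auth object can live in group_vars and be passed to every openstack.cloud module:

# group_vars/all.yml (hypothetical variable names)
os_auth:
  auth_url: https://auth.cloud.myprovider.net/v3/
  username: "{{ os_username }}"
  password: "{{ os_password }}"
  project_id: the-id-of-my-project
  user_domain_name: Default
  project_domain_name: Default

# In a task, the same auth object is reused everywhere
- name: Create instances
  openstack.cloud.server:
    state: present
    auth: "{{ os_auth }}"
    name: "test-instance-1"
    image: "{{ image_name }}"
    flavor: "{{ flavor }}"
    network: "{{ network }}"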

How to store MySQL password on remote host in Ansible?

Using the mysql_user module, I store the new password in a file; however, it stores it on my localhost. I want to save it to the remote host instead. How can I send the file to the /tmp/ directory on the remote machine?
- name: Create MySQL user on Dev
  mysql_user:
    login_unix_socket: /var/run/mysqld/mysqld.sock
    login_host: my_remote_host
    login_user: "{{ MYSQL_USER }}"
    login_password: "{{ MYSQL_PASS }}"
    name: "{{ name }}"
    password: "{{ lookup('password', '/tmp/new_password.txt chars=ascii_letters,digits,hexdigits,punctuation length=10') }}"
    host: 192.168.%
    priv: '*.*:ALTER,ALTER ROUTINE,CREATE,CREATE ROUTINE,CREATE TEMPORARY TABLES,CREATE VIEW,DELETE,DROP,EVENT,EXECUTE,GRANT OPTION,INDEX,INSERT,LOCK TABLES,PROCESS,SELECT,SHOW DATABASES,SHOW VIEW,TRIGGER,UPDATE'
    state: present
I have a feeling I cannot register the password parameter according to this answer, so I am editing my question.
In Ansible, lookup runs on the control host where Ansible is running. There's no way to have the password lookup create the file directly on the remote host.
You could use the copy module after generating your password:
- name: Create MySQL user on Dev
  mysql_user:
    login_unix_socket: /var/run/mysqld/mysqld.sock
    login_host: my_remote_host
    login_user: "{{ MYSQL_USER }}"
    login_password: "{{ MYSQL_PASS }}"
    name: "{{ name }}"
    password: "{{ lookup('password', '/tmp/new_password.txt chars=ascii_letters,digits,hexdigits,punctuation length=10') }}"
    host: 192.168.%
    priv: '*.*:ALTER,ALTER ROUTINE,CREATE,CREATE ROUTINE,CREATE TEMPORARY TABLES,CREATE VIEW,DELETE,DROP,EVENT,EXECUTE,GRANT OPTION,INDEX,INSERT,LOCK TABLES,PROCESS,SELECT,SHOW DATABASES,SHOW VIEW,TRIGGER,UPDATE'
    state: present
- name: copy password file to remote host
  copy:
    src: /tmp/new_password.txt
    dest: /tmp/new_password.txt
But using a template task to e.g. generate a my.cnf file on the remote would also be a reasonable solution.
Try using a template task
- name: Save mysql password file
  template:
    src: mysql.j2
    dest: /tmp/mysql-password-file
mysql.j2:
{{ password }}
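One way to tie the two answers together is to generate the password once into a play-level variable and reuse it in both tasks. A sketch, assuming a made-up variable name mysql_new_password (the lookup returns the same value on every call because it reuses the file it wrote on the control host):

vars:
  mysql_new_password: "{{ lookup('password', '/tmp/new_password.txt chars=ascii_letters,digits,hexdigits,punctuation length=10') }}"
tasks:
  - name: Create MySQL user on Dev
    mysql_user:
      # login_* parameters omitted for brevity
      name: "{{ name }}"
      password: "{{ mysql_new_password }}"
      state: present
  - name: Save the generated password on the remote host
    copy:
      content: "{{ mysql_new_password }}"
      dest: /tmp/new_password.txt
      mode: '0600'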

Ansible: how to restart a service one after another on different hosts

Hi everyone!
Please help.
I have 3 servers, and each of them has one systemd service. I need to restart this service one host at a time: after I restart the service on host 1 and it is up again (I can check its TCP port), I need to restart the service on host 2, and so on.
How can I do this with Ansible?
This is my current playbook:
---
- name: Install business service
  hosts: business
  vars:
    app_name: "e-service"
    app: "{{ app_name }}-{{ tag }}.war"
    app_service: "{{ app_name }}.service"
    app_bootstrap: "{{ app_name }}_bootstrap.yml"
    app_folder: "{{ eps_client_dir }}/{{ app_name }}"
    archive_folder: "{{ app_folder }}/archives/arch_{{ ansible_date_time.date }}_{{ ansible_date_time.hour }}_{{ ansible_date_time.minute }}"
    app_distrib_dir: "{{ eps_distrib_dir }}/{{ app_name }}"
    app_dependencies: "{{ app_distrib_dir }}/dependencies.tgz"
  tasks:
    - name: Copy app {{ app }} to {{ app_folder }}
      copy:
        src: "{{ app_distrib_dir }}/{{ app }}"
        dest: "{{ app_folder }}/{{ app }}"
        group: ps_group
        owner: ps
        mode: 0644
      notify:
        - restart app
    - name: Copy service setting to /etc/systemd/system/{{ app_service }}
      template:
        src: "{{ app_distrib_dir }}/{{ app_service }}"
        dest: /etc/systemd/system/{{ app_service }}
        mode: 0644
      notify:
        - restart app
    - name: Start service {{ app }}
      systemd:
        daemon_reload: yes
        name: "{{ app_service }}"
        state: started
        enabled: true
  handlers:
    - name: restart app
      systemd:
        daemon_reload: yes
        name: "{{ app_service }}"
        state: restarted
        enabled: true
and all services restart at the same time.
Try serial and max_fail_percentage. The max_fail_percentage value is a percentage of the whole number of your hosts; if server 1 fails, the remaining servers will not run:
---
- name: Install eps-business service
  hosts: business
  serial: 1
  max_fail_percentage: 10
Try with
---
- name: Install eps-business service
  hosts: business
  serial: 1
By default, Ansible will try to manage all of the machines referenced in a play in parallel. For a rolling update use case, you can define how many hosts Ansible should manage at a single time by using the serial keyword
https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html
Create a script that restarts your service, then add a loop that checks whether the service is up; once the restart has completed successfully, based on the status it returns, you can move on to the next service.
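Putting the serial suggestion together with the TCP-port check mentioned in the question, a minimal sketch could look like this (the service name and port are placeholders, not from the original playbook):

---
- name: Restart the service one host at a time
  hosts: business
  serial: 1                      # Ansible finishes this play on one host before starting the next
  tasks:
    - name: Restart the service
      systemd:
        name: e-service.service  # placeholder service name
        state: restarted
    - name: Wait for the service port before moving on to the next host
      wait_for:
        host: 127.0.0.1
        port: 8080               # placeholder port
        state: started
        timeout: 120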

How to change vmware network adapter with ansible

I'm using Ansible 2.9.2, and I need to replace the network adapter on a specific VMware VM.
In my vCenter VM settings I see:
Networks: Vlan_12
My playbook doesn't see that network name.
tasks:
  - name: Changing network adapter
    vmware_guest_network:
      datacenter: "{{ datacenter }}"
      hostname: "{{ vcenter_server }}"
      username: "{{ vcenter_user }}"
      password: "{{ vcenter_pass }}"
      folder: "{{ folder }}"
      cluster: "{{ cluster }}"
      validate_certs: no
      name: test
      networks:
        - name: "Vlan_12"
          vlan: "Vlan_12"
          connected: false
          state: absent
    register: output
I get this error:
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Network 'Vlan_12' does not exist."}
I'm trying to replace Vlan_12 with another network adapter named Vlan_13, so I first tried to delete the existing network adapter. The Ansible docs have very limited examples.
Thanks.
You don't have to power off the machine while adding/removing networks.
You can remove/add a NIC on the fly, at least on Linux VMs; I'm not sure about Windows VMs.
Changing networks live works just fine. All you need is state: present, the label of your current interface (almost certainly 'Network adapter 1', which is the default for the first network interface), and the name of the port group that you'd like to connect to this interface.
Here's the playbook I've been using:
---
- hosts: localhost
  gather_facts: no
  tasks:
    - name: migrate network
      vmware_guest_network:
        hostname: '{{ vcenter_hostname }}'
        username: '{{ vcenter_username }}'
        password: '{{ vcenter_password }}'
        datacenter: '{{ datacenter }}'
        validate_certs: False
        name: '{{ vm_hostname }}'
        gather_network_info: False
        networks:
          - state: present
            label: "Network adapter 1"
            name: '{{ new_net_name }}'
      delegate_to: localhost
You need the specific VMware machine to be powered off so you can change the network adapter:
tasks:
  - name: Changing network adapter
    vmware_guest_network:
      datacenter: "{{ datacenter }}"
      hostname: "{{ vcenter_server }}"
      username: "{{ vcenter_user }}"
      password: "{{ vcenter_pass }}"
      folder: "{{ folder }}"
      cluster: "{{ cluster }}"
      validate_certs: no
      name: test
      networks:
        - name: "Vlan_12"
          label: "Network adapter 1"
          connected: False
          state: absent
        - label: "Network adapter 1"
          state: new
          connected: True
          name: Vlan_13
This playbook deletes the present network adapter and then adds the new adapter instead. I couldn't find a way to change it, only to delete and add.
