Using the mysql_user module, I store the new password in a file; however, the file ends up on my localhost. I want to save it to the remote host instead. How can I send the file to the /tmp/ directory on the remote machine?
- name: Create MySQL user on Dev
  mysql_user:
    login_unix_socket: /var/run/mysqld/mysqld.sock
    login_host: my_remote_host
    login_user: "{{ MYSQL_USER }}"
    login_password: "{{ MYSQL_PASS }}"
    name: "{{ name }}"
    password: "{{ lookup('password', '/tmp/new_password.txt chars=ascii_letters,digits,hexdigits,punctuation length=10') }}"
    host: 192.168.%
    priv: '*.*:ALTER,ALTER ROUTINE,CREATE,CREATE ROUTINE,CREATE TEMPORARY TABLES,CREATE VIEW,DELETE,DROP,EVENT,EXECUTE,GRANT OPTION,INDEX,INSERT,LOCK TABLES,PROCESS,SELECT,SHOW DATABASES,SHOW VIEW,TRIGGER,UPDATE'
    state: present
I have a feeling I cannot register the password parameter according to this answer, so I am editing my question.
In Ansible, lookups run on the control host, so there's no way to have the password lookup create the file directly on the remote host.
You could use the copy module after generating the password:
- name: Create MySQL user on Dev
  mysql_user:
    login_unix_socket: /var/run/mysqld/mysqld.sock
    login_host: my_remote_host
    login_user: "{{ MYSQL_USER }}"
    login_password: "{{ MYSQL_PASS }}"
    name: "{{ name }}"
    password: "{{ lookup('password', '/tmp/new_password.txt chars=ascii_letters,digits,hexdigits,punctuation length=10') }}"
    host: 192.168.%
    priv: '*.*:ALTER,ALTER ROUTINE,CREATE,CREATE ROUTINE,CREATE TEMPORARY TABLES,CREATE VIEW,DELETE,DROP,EVENT,EXECUTE,GRANT OPTION,INDEX,INSERT,LOCK TABLES,PROCESS,SELECT,SHOW DATABASES,SHOW VIEW,TRIGGER,UPDATE'
    state: present

- name: copy password file to remote host
  copy:
    src: /tmp/new_password.txt
    dest: /tmp/new_password.txt
But using a template task to e.g. generate a my.cnf file on the remote would also be a reasonable solution.
Try using a template task
- name: Save mysql password file
  template:
    src: mysql.j2
    dest: /tmp/mysql-password-file
mysql.j2
{{ password }}
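If the goal is the my.cnf-style file mentioned above rather than a bare password, the same approach works with a slightly richer template. This is only a sketch: the db_user variable, the template name and the destination path are assumptions, not part of the original answer.

# templates/client-my.cnf.j2 (hypothetical)
[client]
user = {{ db_user }}
password = {{ lookup('password', '/tmp/new_password.txt chars=ascii_letters,digits,hexdigits,punctuation length=10') }}

- name: Save MySQL client config on the remote host
  template:
    src: client-my.cnf.j2
    dest: /root/.my.cnf
    mode: "0600"

Because the password lookup reads an existing file instead of regenerating it, the template renders the same password that the mysql_user task created earlier.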
Related
Many of the Ansible community.vmware modules take a 'hostname' parameter, which is the name of the ESXi server.
In my case, a guest could be on any one of multiple ESXi servers (8, for now), and a new server could be added by the support team at any time.
Is there a way to find out which ESXi server a guest is on, or is it mandatory that I know this up front?
I could keep a list of the ESXi servers, update it on demand, and loop over it with the 'community.vmware.vmware_guest_find' module and "with_items", but I don't actually know how I would do this (iterate over the servers, changing the 'hostname', and stop once I finally find the guest).
Any help?
I came up with the solution below. You need to have the list of ESXi hosts beforehand.
[...]
vars:
  vcenter_hostnames:
    - vcenter01
    - vcenter02
    - ...
[...]
- block:
    - name: Navigate through all vCenters looking for the guest
      community.vmware.vmware_guest_find:
        hostname: "{{ item }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        name: "{{ guest_name }}"
        validate_certs: no
      delegate_to: localhost
      register: guest_find_result
      with_items: "{{ vcenter_hostnames }}"
  rescue:
    - name: Do nothing, only to avoid raising a failure message
      meta: noop
  always:
    - name: Record which vCenter and folder the guest is in
      ansible.builtin.set_fact:
        guest_folder: "{{ item['folders'][0] }}"
        vcenter_hostname: "{{ item['item'] }}"
      with_items: "{{ guest_find_result['results'] }}"
      when: item['failed'] == false
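Once those facts are set, they can be reused by later tasks, for example with community.vmware.vmware_guest_info (a hedged sketch; this task and the datacenter_name variable are assumptions, not part of the original answer):

- name: Gather info about the guest from the vCenter that was found
  community.vmware.vmware_guest_info:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    datacenter: "{{ datacenter_name }}"
    folder: "{{ guest_folder }}"
    name: "{{ guest_name }}"
    validate_certs: no
  delegate_to: localhost
  register: guest_info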
I have an Ansible playbook called main.yml that builds a Windows VM using an OVF template. It then configures the VM using variables that I pass. Typical Ansible stuff.
What I would like to do is have another task inside the main.yml run at the end that configures a couple of things on the VM (configureserver.yml). I pass "{{ vmName }}" as a variable in main.yml, and want the second playbook (configureserver.yml) to run against that "{{ vmName }}" variable.
Main.yml (removed some of the customization to shorten this post)
- name: Deploy Virtual Machine from an OVF template in content library
  community.vmware.vmware_content_deploy_ovf_template:
    name: "{{ vmName }}"
    hostname: "{{ vcenterName }}"
    username: "{{ vmwareUser }}"
    password: "{{ vmwarePassword }}"
    template: "{{ templateName }}"

- name: Configure the VM
  community.vmware.vmware_guest:
    name: "{{ vmName }}"
    hostname: "{{ vcenterName }}"
    username: "{{ vmwareUser }}"
    password: "{{ vmwarePassword }}"
    datacenter: "{{ datacenterName }}"
    cluster: "{{ clusterName }}"
    datastore: "{{ datastoreclusterName }}"

# THIS IS WHAT I WANT TO ADD
- name: Configure the D Drive
  include_tasks:
    file: configureserver.yml
configureserver.yml
- hosts: "{{ vmName }}"
  gather_facts: no
  become: yes
  vars_files:
    - passwords.yml
    - vars.test.yml
  vars:
    - ansible_connection: ssh
    - ansible_shell_type: cmd
    - ansible_become_method: runas
    - ansible_become_user: System
In the code above, I want to run a separate task in a separate yml that will configure the server. I want to run the configureserver.yml playbook against the "{{ vmName }}" variable as the inventory or host.
Glad to provide more details if needed.
I figured it out. I changed the include_tasks to an import_playbook statement and that did the trick.
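For reference, a minimal sketch of what the end of main.yml might look like with that change. Note that import_playbook has to sit at the play level rather than inside a tasks: list, and "{{ vmName }}" must already be resolvable there (e.g. passed as an extra var):

# end of main.yml, after the play that deploys and configures the VM
- import_playbook: configureserver.yml

If the freshly deployed VM is not yet in the inventory, an add_host task in the first play (hypothetical, not shown in the question) can register "{{ vmName }}" as a host before the imported playbook targets it.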
I've been trying to deploy an instance in OpenStack to a different project than my user's default project. The only way to do this appears to be by passing the project_name within the auth: setting. This works fine, but is not really compatible with using a clouds.yaml config with the clouds: setting, or even with using the admin-openrc.sh file that OpenStack provides. (The admin-openrc.sh appears to take precedence over any settings in auth:.)
I'm using the current openstack.cloud collection 1.3.0 (https://docs.ansible.com/ansible/latest/collections/openstack/cloud/index.html). Some of the modules have the option to specify a project:, like the network module, but the server module does not.
So this deploys in a named project:
- name: Create instances
  server:
    state: present
    auth:
      auth_url: "{{ auth_url }}"
      username: "{{ username }}"
      password: "{{ password }}"
      project_name: "{{ project }}"
      project_domain_name: "{{ domain_name }}"
      user_domain_name: "{{ domain_name }}"
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"
Having sourced admin-openrc.sh, this deploys only to your default project (OS_PROJECT_NAME=<project_name>):
- name: Create instances
  server:
    state: present
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"
When I unset OS_PROJECT_NAME but set all other values from admin-openrc.sh, I can do the following, but this requires working with a non-default setup (unsetting that one environment variable):
- name: Create instances
  server:
    state: present
    auth:
      project_name: "{{ project }}"
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"
I'm looking for the most useful way to use a single authentication model (be it clouds.yaml or environment variables) for all my OpenStack modules, while still being able to deploy to a specific project.
(You should upgrade to the latest collection (1.5.3), though this may also work with 1.3.0.)
You can use the cloud property of the server task (openstack.cloud.server). Here is how you can proceed:
All project definitions are stored in clouds.yml (here is part of its content):
clouds:
  tarantula:
    auth:
      auth_url: https://auth.cloud.myprovider.net/v3/
      project_name: tarantula
      project_id: the-id-of-my-project
      user_domain_name: Default
      project_domain_name: Default
      username: my-username
      password: my-password
    regions:
      - US
      - EU1
From a task, you can refer to the appropriate cloud like this:
- name: Create instances
  server:
    state: present
    cloud: tarantula
    region_name: EU1
    name: "test-instance-1"
We now refrain from using any environment variables and make sure the project ID is set in the group_vars configuration. This works because it does not depend on a local clouds.yml file. We basically build an auth object in Ansible that can be used throughout the deployment.
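A sketch of that group_vars approach, assuming an os_auth dictionary is defined there (the variable names are assumptions, not part of the collection):

# group_vars/all.yml (hypothetical)
os_auth:
  auth_url: "{{ auth_url }}"
  username: "{{ os_username }}"
  password: "{{ os_password }}"
  project_name: "{{ project }}"
  project_domain_name: "{{ domain_name }}"
  user_domain_name: "{{ domain_name }}"

# in a play
- name: Create instances
  openstack.cloud.server:
    state: present
    auth: "{{ os_auth }}"
    name: "test-instance-1"
    image: "{{ image_name }}"
    flavor: "{{ flavor }}"
    network: "{{ network }}"

Because auth accepts a dictionary, the same os_auth object can be reused by every openstack.cloud module in the deployment.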
I'm running Ansible 2.9.18 on RHEL 7.
I am using hvac to retrieve usernames and passwords from a HashiCorp Vault.
vars:
  - creds: "{{ lookup('hashi_vault', 'secret=tst2/data/cisco token= url=http://10.80.23.81:8200') }}"

tasks:
  - name: set Cisco creds
    set_fact:
      cisco: "{{ creds['data'] }}"

  - name: Get nxos facts
    nxos_command:
      username: "{{ cisco['username'] }}"
      password: "{{ cisco['password'] }}"
      commands: show ver
      timeout: 30
    register: ver_out

  - debug: msg="{{ ver_out.stdout }}"
But username and password are deprecated, and I am trying to figure out how to pass the username and password as a "provider" variable. This code doesn't work:
vars:
  asa_api:
    - creds: "{{ lookup('hashi_vault', 'secret=tst2/data/cisco token= url=http://10.80.23.81:8200') }}"
      set_fact:
        cisco: "{{ creds['data'] }}"
      username: "{{ cisco['username'] }}"
      password: "{{ cisco['password'] }}"

tasks:
  - name: show run
    asa_command:
      commands: show run
      provider: "{{ asa_api }}"
    register: run
    become: yes
    tags:
      - show_run
I cannot figure out the syntax for making this work. I would greatly appreciate any help.
Thanks,
Steve
Disclaimer: This is a generic answer. I do not have any network device to test this fully, so you might have to adapt a bit after reading the documentation.
You are taking this the wrong way. You don't need set_fact at all, and both methods you are trying to use (user/pass or provider dict) are actually deprecated. Ansible treats your network device like any other host and will use the configured user and password if they exist.
In the following example, I'm assuming your playbook only targets network devices and that the login/pass stored in your vault is the same on all devices.
- name: Untested network device connection configuration demo
  hosts: my_network_device_group
  vars:
    # This indicates which connection plugin to use. Default is ssh.
    # Another possible value is httpapi. See the above documentation link.
    ansible_connection: network_cli

    vault_secret: tst2/data/cisco
    vault_token: verysecret
    vault_url: http://10.80.23.81:8200
    vault_options: "secret={{ vault_secret }} token={{ vault_token }} url={{ vault_url }}"
    creds: "{{ lookup('hashi_vault', vault_options).data }}"

    # These are the user and pass used for the connection.
    ansible_user: "{{ creds.username }}"
    ansible_password: "{{ creds.password }}"

  tasks:
    - name: Get nxos version
      nxos_command:
        commands: show ver
        timeout: 30
      register: ver_cmd

    - name: show version
      debug:
        msg: "NXOS version on {{ inventory_hostname }} is {{ ver_cmd.stdout }}"

    - name: Another task to play on targets
      debug:
        msg: "Task played on {{ inventory_hostname }}"
Rather than defining vars at play level, you can store this information in your inventory for all hosts, for a specific group, or even per host. See how to organise your group and host variables if you want to use that feature.
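For instance, the play-level vars above could move into a group_vars file so that every play targeting the group picks them up automatically (a sketch; the group name simply matches the hosts: line in the example above):

# group_vars/my_network_device_group.yml
ansible_connection: network_cli
vault_secret: tst2/data/cisco
vault_token: verysecret
vault_url: http://10.80.23.81:8200
vault_options: "secret={{ vault_secret }} token={{ vault_token }} url={{ vault_url }}"
creds: "{{ lookup('hashi_vault', vault_options).data }}"
ansible_user: "{{ creds.username }}"
ansible_password: "{{ creds.password }}"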
I'm new to all the Ansible stuff, so most of the time I'm in "trial and error" mode.
Now I'm facing a challenge with a playbook and I do not know where to look further.
The main task of this playbook should be to get a "show run" from a Cisco device and save it in a text file on a backup server (which is a remote server).
The only task that is not working is the backup task.
Here is my playbook:
- hosts: IOSGATEWAY
  gather_facts: no
  connection: local
  tasks:
    - name: GET CREDENTIALS
      include_vars: path/to/all/all.yml

    - name: DEFINE CONNECTION TO GW
      set_fact:
        connection:
          host: "{{ inventory_hostname }}"
          username: "{{ creds['username'] }}"
          password: "{{ creds['password'] }}"

    - name: GET SHOW RUN
      ios_command:
        provider: "{{ connection }}"
        commands:
          - show run
      register: show_run

    - name: SAVE TO BACKUP SERVER
      copy:
        content: "{{ show_run.stdout[0] }}"
        dest: "path/to/Directory/{{ inventory_hostname }}.txt"
      delegate_to: BACKUPSERVER
Can someone point me in the right direction?
You set connection: local for the playbook, so everything is executed locally (which is correct for the ios_... modules, but not what you actually want for the copy module).
I'd recommend defining the ansible_connection variable in your inventory per group of hosts/devices, so Ansible will use a local connection for your IOS devices and SSH for the backup server.
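A sketch of that per-group setup (the group names and file layout are assumptions):

# inventory
[ios_devices]
IOSGATEWAY

[backup]
BACKUPSERVER

# group_vars/ios_devices.yml
ansible_connection: local

# group_vars/backup.yml
ansible_connection: ssh

With ansible_connection defined in the inventory, the connection variable overrides the play-level connection: local keyword, so the copy task delegated to BACKUPSERVER will use SSH as intended.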