Using an Ansible playbook, how do I copy Java certs to hosts? Each host has a different JDK installed. I need to determine which JDK each host is running and copy the certificates to all the hosts.
I have written the playbook below and included the error that I'm getting. Please help me figure out what's wrong.
---
- hosts: test
  vars:
    pack1: /ngs/app/rdrt
    pack2: /usr/java/jdk*
  tasks:
    - name: copy the files
      copy:
        src: "/Users/sivarami.rc/Downloads/Problem46218229/apple_corporate_root_ca.pem"
        dest: "{{ pack1 }}"
    - name: copy the files
      copy:
        src: "/Users/sivarami.rc/Downloads/Problem46218229/apple_corporate_root_ca2.pem"
        dest: "{{ pack1 }}"
    - name: copy the files
      copy:
        src: "/Users/sivarami.rc/Downloads/Problem46218229/ca-trust-check-1.0.0.jar"
        dest: "{{ pack1 }}"
    - name: Import SSL certificate to a given cacerts keystore
      java_cert:
        cert_path: "{{ pack1 }}/apple_corporate_root_ca.pem"
        cert_alias: Apple_Corporate_Root_CA
        cert_port: 443
        keystore_path: "{{ pack2 }}/jre/lib/security/cacerts"
        keystore_pass: change-it
        executable: "{{ pack2 }}/bin/keytool"
        state: present
    - name: Import SSL certificate to a cacerts keystore
      java_cert:
        cert_path: "{{ pack1 }}/apple_corporate_root_ca2.pem"
        cert_alias: Apple_Corporate_Root_CA2
        cert_port: 443
        keystore_path: "{{ pack2 }}/jre/lib/security/cacerts"
        keystore_pass: changeit
        executable: "{{ pack2 }}/bin/keytool"
        state: present
    - name: checking those files trusted or untrusted
      shell: "{{ pack2 }}/bin/java -jar {{ pack1 }}/ca-trust-check-1.0.0.jar"
The error:
fatal: [c5147061#rn2-radart-lapp117.rno.apple.com]: FAILED! => {"changed": false, "cmd": "'/usr/java/jdk*/bin/keytool'", "msg": "[Errno 2] No such file or directory", "rc": 2}
fatal: [c5147061#rn2-radart-lapp121.rno.apple.com]: FAILED! => {"changed": false, "cmd": "'/usr/java/jdk*/bin/keytool'", "msg": "[Errno 2] No such file or directory", "rc": 2}
The following error is displayed:
"cmd": "'/usr/java/jdk*/bin/keytool'", "msg": "[Errno 2] No such file or directory"
As you can see, the keytool command cannot be found at that location. You need to ensure that the path you're providing actually exists on the server.
Where you define the pack2 variable, you need to provide the full path instead of using a wildcard, e.g. like this:
vars:
  pack2: /usr/java/jdk-1.8.0_67
Then ensure that this path exists on the remote machine, and your code should no longer show that error.
If the path is different on each node because each node has a different version of Java, here are some options:
Use host-specific variables to define the path for each host, if you have that information.
Gather the information in a previous step, e.g. as in Check Java version via Ansible playbook (see the sketch below).
Check the JAVA_HOME environment variable to see if it is set.
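For the gather-it-first option, a minimal sketch could discover the JDK directory on each host and feed it into java_cert (this assumes exactly one jdk* directory under /usr/java per host; the jdk_dir register name is illustrative):
- name: Discover the JDK installation directory on this host
  shell: ls -d /usr/java/jdk* | head -n 1
  register: jdk_dir
  changed_when: false
- name: Import SSL certificate into the discovered cacerts keystore
  java_cert:
    cert_path: "{{ pack1 }}/apple_corporate_root_ca.pem"
    cert_alias: Apple_Corporate_Root_CA
    keystore_path: "{{ jdk_dir.stdout }}/jre/lib/security/cacerts"
    keystore_pass: changeit
    executable: "{{ jdk_dir.stdout }}/bin/keytool"
    state: present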
I had the same error, that the keytool utility was not found on my PATH, but that was because I did not use become_user to switch to the user that has the correct PATH.
So my solution was to add the following line to my playbook:
become: yes
become_user: wls
(wls is the weblogic user but can be another system account depending on your needs)
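In the context of the question's playbook, that could look like the sketch below (the java_cert arguments are carried over from the question; whether wls or another account owns the keystore depends on your setup):
- name: Import SSL certificate to a given cacerts keystore
  become: yes
  become_user: wls
  java_cert:
    cert_path: "{{ pack1 }}/apple_corporate_root_ca.pem"
    cert_alias: Apple_Corporate_Root_CA
    keystore_path: "{{ pack2 }}/jre/lib/security/cacerts"
    keystore_pass: changeit
    state: present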
I had the same error because keytool was linked to a really old version of the JDK (version 6). Switching to a more recent version (JDK 11) fixed the error.
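To see which JDK the keytool on your PATH actually belongs to, you can resolve the symlink chain on each host (a hypothetical diagnostic task; readlink -f follows links to the real binary):
- name: Resolve which JDK the keytool on the PATH points to
  shell: readlink -f "$(command -v keytool)"
  register: keytool_path
  changed_when: false
- debug:
    var: keytool_path.stdout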
I am trying to do the following using Ansible 2.8.4 and awx:
Read some facts from Cisco IOS devices (works)
Put results into a local file using a template (works)
Copy/Move the resulting file to a different server
Since I have to use a different user to access IOS devices and servers, and the servers in question aren't part of the inventory used for the playbook, I am trying to achieve this using become_user and delegate_to.
The initial user (defined in the awx template) is allowed to connect to the IOS devices, while different_user can connect to the servers using an ssh private key.
The playbook:
---
- name: Read Switch Infos
  hosts: all
  gather_facts: no
  tasks:
    - name: Gather IOS Facts
      ios_facts:
    - debug: var=ansible_net_version
    - name: Set Facts IOS
      set_fact:
        ios_version: "{{ ansible_net_version }}"
    - name: Create Output file
      file: path=/tmp/test state=directory mode=0755
      delegate_to: 127.0.0.1
      run_once: true
    - name: Run Template
      template:
        src: ios_firmware_check.j2
        dest: /tmp/test/output.txt
      delegate_to: 127.0.0.1
      run_once: true
    - name: Set up keys
      become: yes
      become_method: su
      become_user: different_user
      authorized_key:
        user: different_user
        state: present
        key: "{{ lookup('file', '/home/different_user/.ssh/key_file') }}"
      delegate_to: 127.0.0.1
      run_once: true
    - name: Copy to remote server
      remote_user: different_user
      copy:
        src: /tmp/test/output.txt
        dest: /tmp/test/output.txt
      delegate_to: remote.server.fqdn
      run_once: true
When run, the playbook fails in the Set up keys task trying to access the home directory with the ssh key:
TASK [Set up keys] *************************************************************
task path: /tmp/awx_2206_mz90qvh9/project/IOS/ios_version.yml:23
[WARNING]: Unable to find '/home/different_user/.ssh/key_file' in expected paths
(use -vvvvv to see paths)
File lookup using None as file
fatal: [host]: FAILED! => {
"msg": "An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError'>, original message: could not locate file in lookup: /home/different_user/.ssh/key_file"
}
I'm assuming my mistake is somehow related to which user is trying to access the /home/ directory on which device.
Is there a better/more elegant/working way of connecting to a different server using an ssh key to move around files?
I know one possibility would be to just scp using the shell module, but that always feels a bit hacky.
(Sort of) solved using encrypted variables in hostvars with Ansible Vault.
How to get there:
Encrypting the passwords:
This needs to be done from any command line with Ansible installed; for some reason, this can't be done in Tower/AWX:
ansible-vault encrypt_string "password"
You'll be prompted for a password to encrypt/decrypt.
If you're doing this for Cisco devices, you'll want to encrypt both the ssh and the enable password using this method.
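You can also have ansible-vault emit the variable assignment together with the encrypted value, ready to paste into the inventory (the variable name here is just an example):
ansible-vault encrypt_string "password" --name "ansible_ssh_pass"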
Add encrypted passwords to inventory
For testing, I put it in hostvars for a single switch; it should be fine to put it into groupvars and use it for multiple switches as well.
ansible_ssh_pass should be the password to access the switch, ansible_become_pass is the enable password.
---
all:
  children:
    Cisco:
      children:
        switches:
    switches:
      hosts:
        HOSTNAME:
          ansible_host: ip-address
          ansible_user: username
          ansible_ssh_pass: !vault |
            $ANSIBLE_VAULT;1.1;AES256
            [encrypted string]
          ansible_connection: network_cli
          ansible_network_os: ios
          ansible_become: yes
          ansible_become_method: enable
          ansible_become_pass: !vault |
            $ANSIBLE_VAULT;1.1;AES256
            [encrypted string]
Adding the vault password to Tower/AWX
Add a new credential with credential type "Vault" and the password you used earlier to encrypt the strings.
Now, all you need to do is add the credential to your job template (the template can have one "normal" credential (machine, network, etc.) and multiple vaults).
The playbook then automagically accesses the vault credential to decrypt the strings in the inventory.
Playbook to get Switch Infos and drop template file on a server
The playbook now looks something like below, and does the following:
Gather Facts on all Switches in Inventory
Write all facts into a .csv using a template, save the file on the ansible host
Copy said file to a different server using a different user
The template is configured with the user that is able to access the server; the user used to access the switches with a password is stored in the inventory, as seen above.
---
- name: Read Switch Infos
  hosts: all
  gather_facts: no
  tasks:
    - name: Create Output file
      file: path=/output/directory state=directory mode=0755
      delegate_to: 127.0.0.1
      run_once: true
    - debug:
        var: network
    - name: Gather IOS Facts
      remote_user: username
      ios_facts:
    - debug: var=ansible_net_version
    - name: Set Facts IOS
      set_fact:
        ios_version: "{{ ansible_net_version }}"
    - name: Run Template
      template:
        src: ios_firmware_check.csv.j2
        dest: /output/directory/filename.csv
      delegate_to: 127.0.0.1
      run_once: true
    - name: Create Destination folder on remote server outside inventory
      remote_user: different_username
      file: path=/destination/directory state=directory mode=0755
      delegate_to: remote.server.fqdn
      run_once: true
    - name: Copy to remote server outside inventory
      remote_user: different_username
      copy:
        src: /output/directory/filename.csv
        dest: /destination/directory/filename.csv
      delegate_to: remote.server.fqdn
      run_once: true
I like to use the synchronize module in Ansible to synchronize files from one server to some other servers via SSH:
- name: Copy files to all servers
  synchronize:
    src: /source/path/
    dest: "rsync://{{ ansible_nodename }}:/destination/path/"
  delegate_to: src-host
So by default this module would use the inventory_hostname, but from src-host that hostname is a different one. The only way I found so far was to use rsync://{{ ansible_nodename }}, but then it seems this no longer happens via SSH, and I get a No route to host (113)\nrsync error: error in socket IO (code 10) at clientserver.c(128) [sender=3.1.0]
I also tried to override inventory_hostname just for this one task, with no luck so far.
So, for example, I imagine something like this:
- name: Copy custom config to all servers
  synchronize:
    src: /opt/app/dir/
    dest: /opt/app/dir/
  delegate_to: src-host
  vars:
    inventory_hostname: "{{ ansible_nodename }}"
But of course it fails when manipulating inventory_hostname, with the following message:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: jinja2.exceptions.UndefinedError: "hostvars['{{ ansible_nodename }}']" is undefined
fatal: [my-host]: FAILED! => {"msg": "Unexpected failure during module execution.", "stdout": ""}
If you want to copy something from serverA to serverB using serverB's eth0 interface address, it should be like this:
- name: Copy custom config from serverA to serverB
  synchronize:
    src: "/path/to/source/on/serverA/machine"
    dest: "rsync://{{ hostvars['serverB']['ansible_facts']['ansible_eth0']['ipv4']['address'] }}/path/where/to/put/files/on/serverB/machines"
  delegate_to: serverA
If you want to template the target string based on some hostname, you can use the special variable inventory_hostname, or (in case your inventory hostname is not the real address) access other hosts' variables, e.g.:
{{ hostvars[inventory_hostname]['ansible_facts']['ansible_fqdn'] }}
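Applied to the question's task, following the same rsync:// pattern as above, that could look like this (a sketch; it assumes facts have been gathered so ansible_fqdn is populated):
- name: Copy custom config to all servers
  synchronize:
    src: /opt/app/dir/
    dest: "rsync://{{ hostvars[inventory_hostname]['ansible_facts']['ansible_fqdn'] }}:/opt/app/dir/"
  delegate_to: src-host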
I am deploying a CentOS machine, and one of the tasks reads a file that is rendered by the Consul service, which places it under /etc/sysconfig. I am trying to read it into a variable later using the lookup module, but it throws the error below:
fatal: [ansible_vm1]: FAILED! => {"failed": true, "msg": "could not locate file in lookup: /etc/sysconfig/idb_EndPoint"}
But I am running the lookup task well after the point where the idb_EndPoint file is generated, and I also logged in manually to verify that the file is available.
- name: importing the file contents to variable
  set_fact:
    idb_endpoint: "{{ lookup('file', '/etc/sysconfig/idb_EndPoint') }}"
  become: true
I also tried privilege escalation with another user (become_user: deployuser along with become: true), but it still didn't work. I'm using Ansible version 2.2.1.0.
All lookup plugins in Ansible are executed locally on the control machine, so your file lookup searches the Ansible controller, not the target host. Instead, use the slurp module, which reads the file on the remote host and returns its content base64-encoded:
- name: importing the file contents to variable
  slurp:
    src: /etc/sysconfig/idb_EndPoint
  register: idb_endpoint_b64
  become: true

- set_fact:
    idb_endpoint: "{{ idb_endpoint_b64.content | b64decode }}"
I read a YAML file locally with the following playbook:
- name: Ensure the deploy_manifest var is defined and read deploy manifest
  hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - assert:
        that: deploy_manifest is defined
        msg: |
          Error: Must provide providers config path. Fix: Add '-e deploy_manifest=/path/to/manifest' to the ansible-playbook command
    - name: Read deploy manifest
      include_vars:
        file: "{{ deploy_manifest }}"
        name: manifest
      register: manifest
    - debug:
        msg: "[{{ manifest.key }}]: {{ manifest.value }}"
      with_dict: "{{ manifest.ansible_facts }}"
and then in the same playbook YAML file I run:
- name: Deploy Backend services
  hosts: backend
  remote_user: ubuntu
  gather_facts: False
  vars:
    env: "{{ env }}"
    services: "{{ manifest.ansible_facts }}"
  tasks:
    - include_role:
        name: services_backend
      when: backend | default(true) | bool
However, it doesn't work: the debug task fails, saying that manifest is empty.
What is the best way to read a YAML file (or a configuration in general) in one play and then pass the variables to another play?
Your debug module doesn't say "that manifest is empty"; it says the key manifest.key does not exist, because it does not.
You registered a fact named manifest with:
register: manifest
You then try to refer to a key of the above manifest named key and another key (!) named value:
msg: "[{{ manifest.key }}]: {{ manifest.value }}"
Please read the Looping over Hashes chapter and note that (without using loop_control) you refer to the iterated variable as item.
Please also note that with name: manifest and register: manifest you read your vars file into manifest.ansible_facts.manifest.
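Putting those two points together, the debug task from the question could look like this (a sketch; item is the loop variable, and the vars file content ends up under manifest.ansible_facts.manifest):
- debug:
    msg: "[{{ item.key }}]: {{ item.value }}"
  with_dict: "{{ manifest.ansible_facts.manifest }}"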
I use Ansible as a provisioner for Vagrant. I have a task:
- name: download and unarchive redis binaries
  unarchive:
    src: "http://download.redis.io/redis-stable.tar.gz"
    dest: "/tmp"
    remote_src: True
but for some reason I see an error in the console when I run vagrant provision:
"failed": true, "msg": "file or module does not exist: /Users/my-username/Projects/project-name/http:/download.redis.io/redis-stable.tar.gz"`
> ansible --version
ansible 2.1.2.0
Any ideas?
NB: look carefully at the error: http:/download. Why is there only one slash?
The syntax from your question works with Ansible 2.2.0.0 and later.
For Ansible 2.0 and 2.1 use:
- name: download and unarchive redis binaries
  unarchive:
    src: "http://download.redis.io/redis-stable.tar.gz"
    dest: "/tmp"
    copy: false
The double slash from your question was stripped because the src argument was treated as a path to a local file (old versions of Ansible required copy: false for src to be interpreted as a URL).