I have two web servers behind a load balancer that use Let's Encrypt for automatic SSL. web1 will handle the creation and renewal of the SSL keys and then synchronize those keys onto web2. Trying to use a variant of this isn't working for me:
- name: Sync SSL files from master to slave(s)
  synchronize:
    src: "{{ item }}"
    dest: "{{ item }}"
  when: inventory_hostname != 'web1'
  delegate_to: web1
  with_items:
    - /etc/nginx/ssl/letsencrypt/
    - /var/lib/letsencrypt/csrs/
    - /var/lib/letsencrypt/account.key
    - /etc/ssl/certs/lets-encrypt-x3-cross-signed.pem
That returns an immediate error of:
Permission denied (publickey).
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(605) [Receiver=3.0.9]
Why isn't the SSH agent forwarding working once Ansible logs into web1 or web2? When I do this manually, it works fine:
ssh -A user@web1
# logged into web1 successfully
ssh user@web2
# logged into web2 successfully
Here is my ansible.cfg
[defaults]
filter_plugins = ~/.ansible/plugins/filter_plugins/:/usr/share/ansible_plugins/filter_plugins:lib/trellis/plugins/filter
host_key_checking = False
force_color = True
force_handlers = True
inventory = hosts
nocows = 1
roles_path = vendor/roles
[ssh_connection]
ssh_args=-o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r -o ForwardAgent=yes
pipelining = True
retries = 1
What I think is happening is that I am trying to copy the contents of a folder that only root can read, so sudo is being used. That switches my user, which is why I get the permission denied error, because the SSH key belongs to the non-root user. So it seems I need a way to read a root-only folder and still send its contents across as a regular user. I might add a few steps: copy and relax permissions as root, sync as the non-root user, then use sudo on the remote host to fix permissions again (a rough sketch of that idea is below).
That seems like a lot of steps, and I'm not sure whether synchronize can handle my use case directly.
UPDATED: Added more relevant error
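For what it's worth, here is a rough sketch of the staging idea described above, shown for just one of the paths. The /tmp/ssl-staging location is hypothetical, and ansible_user is assumed to hold the unprivileged login name:

- name: Stage the root-only SSL files where the regular user can read them
  become: true
  shell: |
    rm -rf /tmp/ssl-staging
    mkdir -p /tmp/ssl-staging
    cp -a /etc/nginx/ssl/letsencrypt/. /tmp/ssl-staging/
    chown -R {{ ansible_user }} /tmp/ssl-staging
  when: inventory_hostname == 'web1'

- name: Sync the staged copy from web1 to the other web servers as the regular user
  synchronize:
    src: /tmp/ssl-staging/
    dest: /tmp/ssl-staging/
  delegate_to: web1
  when: inventory_hostname != 'web1'

- name: Put the files back in place as root on the receiving hosts
  become: true
  shell: |
    mkdir -p /etc/nginx/ssl/letsencrypt
    cp -a /tmp/ssl-staging/. /etc/nginx/ssl/letsencrypt/
    chown -R root:root /etc/nginx/ssl/letsencrypt
  when: inventory_hostname != 'web1'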
I'm trying to move everything under /opt/* to a new location on the remote server. I've tried this using command to run rsync directly, as well as the copy and synchronize Ansible modules. In all cases I get the same error message:
"msg": "rsync: link_stat \"/opt/*\" failed: No such file or directory
If I run the command listed in the "cmd" part of the Ansible error message directly on my remote server, it works without error. I'm not sure why Ansible is failing.
Here is the current attempt using synchronize:
- name: move /opt to new partition
  become: true
  synchronize:
    src: /opt/*
    dest: /mnt/opt/*
  delegate_to: "{{ inventory_hostname }}"
You should skip the wildcards; that is a common mistake:
UPDATE
Thanks to the user @Zeitounator, I managed to do it with synchronize.
The advantage of using synchronize instead of the copy module is performance: it is much faster if you have a lot of files to copy.
- name: move /opt to new partition
  become: true
  synchronize:
    src: /opt/
    dest: /mnt/opt
  delegate_to: "{{ inventory_hostname }}"
So basically the initial answer was right, but you needed to remove the wildcards ("*") and the trailing slash on the dest path.
Also, you should add a step that deletes the old files under /opt/; a sketch of that cleanup follows.
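A minimal sketch of that cleanup step, assuming the copy has already been verified and that /opt should end up empty (adjust if /mnt/opt is later mounted over /opt):

- name: remove the old /opt contents once the copy is verified
  become: true
  file:
    path: /opt
    state: absent

- name: recreate the empty /opt directory
  become: true
  file:
    path: /opt
    state: directory
    mode: "0755"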
I want to use the Ansible synchronize module with a private_key.
When issuing the following command, everything works fine:
localuser$ rsync -rltDvzu --delete -e ssh . remoteuser@rsync.cloud.com:/users/remoteuser/
Here is my playbook to achieve the same, executing as root:
- name: Synchronization of src on the control machine to dest on the remote hosts
  synchronize:
    src: /raid5/Pictures/
    dest: rsync://remoteuser@rsync.cloud.com:/users/remoteuser/
    recursive: yes
    private_key: /home/localuser/.ssh/id_rsa
    set_remote_user: no
    copy_links: no
    times: yes
    checksum: yes
    rsync_opts: -e "ssh"
Doing this, the password prompt shows up.
I've tried the following:
toggling set_remote_user --> the password prompt still shows up
setting ansible_user to localuser or remoteuser via set_fact --> the password prompt still shows up
extending rsync_opts with -i and the path to my private key --> error message: No such file or directory
UPDATE TO THE PLAYBOOK
become: yes
become_user: localuser
The password prompt still shows up.
It seems that in the first case you are not root, and in the second you are, so the SSH key pair being used is not the same in the two cases, is it?
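If the play has to keep running as root, one thing worth trying is pointing the task explicitly at localuser's key and using the same ssh-style destination that worked on the command line. This is a sketch only, not a verified fix:

- name: Sync pictures with localuser's key even though the play runs as root
  synchronize:
    src: /raid5/Pictures/
    dest: remoteuser@rsync.cloud.com:/users/remoteuser/
    recursive: yes
    private_key: /home/localuser/.ssh/id_rsa
    set_remote_user: no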
I'm trying to make a run from my Ansible master against a host (let's call it hostclient) which requires performing something on another host (let's call it susemanagerhost).
hostclient needs ansible_ssh_common_args with a ProxyCommand defined, while susemanagerhost needs no ansible_ssh_common_args since it is a direct connection.
So I thought I could use delegate_to, but hostclient and susemanagerhost have different values for the variable ansible_ssh_common_args.
I thought I could change the value of ansible_ssh_common_args inside the run: use set_fact to save ansible_ssh_common_args into ansible_ssh_common_args_backup (because I want to recover the original value for the other standard tasks), then set ansible_ssh_common_args to null (the connection from the Ansible master to susemanagerhost is direct, with no ProxyCommand required). But it is not working.
It seems like it is still using the original value of ansible_ssh_common_args.
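For clarity, this is roughly what the set_fact juggling described above looks like; as noted, the delegated task still appears to pick up the original value:

- name: back up the proxied SSH arguments
  set_fact:
    ansible_ssh_common_args_backup: "{{ ansible_ssh_common_args }}"

- name: clear the SSH arguments for the direct connection
  set_fact:
    ansible_ssh_common_args: ""

- name: run something on susemanagerhost
  shell: ls -la
  delegate_to: susemanagerhost

- name: restore the original SSH arguments
  set_fact:
    ansible_ssh_common_args: "{{ ansible_ssh_common_args_backup }}"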
ansible_ssh_common_args is generally used to execute commands 'through' a proxy server:
<ansible system> => <proxy system> => <intended host>
The way you formulated your question, you won't need ansible_ssh_common_args at all and can stick to using delegate_to:
- name: execute on host client
  shell: ls -la

- name: execute on susemanagerhost
  shell: ls -la
  delegate_to: "root@susemanagerhost"
Call this play with:
ansible-playbook <play> --limit=hostclient
This should do it.
Edit:
After filing a bug on GitHub, the working solution is to have:
ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ansible@{{ jumphost_server }}"'
In host_vars or group_vars.
Followed by a static use of delegate_to:
delegate_to: <hostname>
hostname should be the hostname as used by ansible.
But do not use:
delegate_to: "username@hostname"
This resolved the issue for me, hope it works out for you as well.
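Put together, a minimal sketch of that layout (file names follow the usual host_vars convention; jumphost_server is whatever variable you already define):

# host_vars/hostclient.yml
ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ansible@{{ jumphost_server }}"'

# In the play, delegate with the plain inventory hostname:
- name: execute on susemanagerhost
  shell: ls -la
  delegate_to: susemanagerhost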
I have an SSH key at /home/renz/.ssh/id_rsa.pub. I want to add it to /root/.ssh/authorized_keys on my target host through Ansible. I tried this, but it didn't work:
---
- hosts: snapzio
  tasks:
    - name: Set authorized key taken from file
      authorized_key:
        user: master
        state: present
        key: "{{ lookup('file', '/home/renz/.ssh/id_rsa.pub') }}"
        path: /root/.ssh/authorized_keys
It didn't work because, in the first place, I cannot communicate with the host: my key is not in its authorized keys yet. I think this approach makes sense when I want to reach many hosts, instead of manually copying and pasting the key to each one.
As others have mentioned, if the account you use with Ansible doesn't have an SSH key installed, you'll have to fall back to password authentication. Assuming InstallMyKey.yml is your playbook, you could run something like this:
ansible-playbook InstallMyKey.yml --ask-pass
You'll need to add the remote_user: root line to your YML between the hosts: and tasks: lines, then type in the root password.
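For reference, a sketch of where that line goes in the play from the question (the task itself is unchanged):

---
- hosts: snapzio
  remote_user: root
  tasks:
    - name: Set authorized key taken from file
      authorized_key:
        user: master
        state: present
        key: "{{ lookup('file', '/home/renz/.ssh/id_rsa.pub') }}"
        path: /root/.ssh/authorized_keys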
Assuming the playbook succeeds and everything else in the root SSH settings is correct, your next playbook run should use the renz SSH key and get on without a password.
I want to have a set of git repos with base application config playbooks in them, such that for any VM, regardless of its name or role or how long it has been around, all I need to do to get the base config installed is run an ansible-pull command against the repo, and I get a ready-to-use instance. The problem I run into is that if I have a playbook local.yml set up like so:
- hosts: localhost
  connection: local
  gather_facts: yes
  user: root
  [...]
or like so:
- hosts: all
  connection: local
  gather_facts: yes
  user: root
  [...]
I keep getting the following error:
# ansible-pull -d /tmp/myrepo -U 'http://mygithost/myrepo'
Starting ansible-pull at 2015-06-09 15:04:05
localhost | success >> {
"after": "2e4d1c637170f95dfaa8d16ef56491b64a9f931b",
"before": "d7520ea15baa8ec2c45742d0fd8e209c293c3487",
"changed": true
}
ERROR: Specified --limit does not match any hosts
The only way I've been able to avoid the error is to create an explicit inventory file with an explicit group name and explicit hostnames, and refer to it with the '-i' flag like so:
# cat /tmp/myrepo/myinventory
[mygroup]
myhost1
myhost2
# cat /tmp/myrepo/local.yml
- hosts: mygroup
  connection: local
  gather_facts: yes
  user: root
  [...]
# ansible-pull -d /tmp/myrepo -i myinventory -U 'http://mygithost/myrepo'
But I don't want that. I want any host, regardless of whether its name or role is known in advance, to be able to run ansible-pull against the repo and apply the playbook without having to be explicitly added to the inventory. How do I do that?
Here is the workflow I use for ansible-pull within my VMs:
In the base VM I put a file named hosts at /etc/ansible:
# cat /etc/ansible/hosts
localhost ansible_connection=local
My local.yaml starts with:
- hosts: localhost
  gather_facts: yes
  user: root
  ...
Now I can use ansible-pull without specifying a hosts file.
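With that file baked into the base image, the original command from the question runs as-is:
# ansible-pull -d /tmp/myrepo -U 'http://mygithost/myrepo'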