Copy files to remote machine's /etc/systemd/ directory using Ansible

I am new to Ansible, so I may be saying something that is completely wrong.
I created a VM using KVM; both the remote and local machines are running Ubuntu 16.04.
Now I configured Ansible by creating a key:
ssh-keygen -t rsa -b 4096 -C "D...@192.168.111.113"
This created the key, and I copied it to the remote machine with:
ssh-copy-id D...@192.168.111.113
Then I tested that SSH is working, and it is.
I added the remote machine's address in /etc/ansible/hosts under the [DDAS] group.
Now I can ping the remote machine using Ansible. Then I wrote a playbook to copy files. It works fine for copying to /home/Das1/ only; that is, I can copy files to locations which do not need root permission.
I want to copy these files to the /etc/systemd/ directory instead of /home/Das1/. I changed dest in the playbook, but it gives permission-related errors.
Any help is highly appreciated.
Thanks,
DAS

By default your playbook tasks execute under the context of the user you use to connect to the remote system. Ansible allows you to change the user you use to run a playbook or individual tasks. You can create a new user and give it privileges to the directory you mention or you can use the built-in root user.
To run your entire playbook as root, for example, put this at the top, adjusting for whatever your actual hosts value is:
- hosts: 192.168.111.113
  become: true
  become_user: root
  tasks:
    ...
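For the original question, a minimal sketch of a complete playbook might look like this (the file name below is a placeholder, not taken from the question):
- hosts: DDAS
  become: true
  become_user: root
  tasks:
    - name: Copy a file into /etc/systemd/ (placeholder file name)
      copy:
        src: files/example.service        # placeholder local file
        dest: /etc/systemd/example.service
        owner: root
        group: root
        mode: "0644"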

Probably the /etc/systemd/ directory does not have "write" permission for the user you are using.
Check the permissions on /etc/systemd/ with ls -lrt.

Related

Automate server setup with Ansible: SSH keypair setup fails without sshpass

I'm using Ansible and want to automate my VPS and homelab setups. I'm running into an issue with the initial connection.
If I have a fresh VPS that has never been used or logged into, how can I remotely configure the node from my laptop?
ansible.cfg
[defaults]
inventory = ./inventory
remote_user = root
host_key_checking = false
ansible_ssh_common_args = "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
inventory
[homelab]
0.0.0.0 <--- actual IP here
./playbooks/add_pub_keys.yaml
---
- hosts: all
  become: yes
  tasks:
    - name: Install public key on remote node
      authorized_key:
        state: present
        user: root
        key: "{{ lookup('file', '~/.ssh/homelab.pub') }}"
Command
ansible-playbook playbooks/add_pub_keys.yaml
Now, this fails with permission denied, which makes sense because there is nothing that would allow connection to the remote node.
I tried adding --ask-pass to the command:
ansible-playbook playbooks/add_pub_keys.yaml --ask-pass
and typing in the root password, but that fails and says I need sshpass, which is not recommended and not readily available to install on Mac for security reasons. How should I think about this initial setup process?
When I get issues like this, I try to replicate the problem using Ansible ad-hoc commands and go back to basics. It helps to prove where the issue is located.
Are you able to run Ansible ad-hoc commands against your remote server using the password?
ansible -i ip, all -m shell -a 'uptime' -u root -k
If you can't, something is up with the password, or possibly with the ansible.cfg.
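If the ad-hoc command does work with -k, one way to avoid sshpass entirely is to seed the key once with ssh-copy-id (which handles the password prompt itself) and then run the playbook over key-based auth. A sketch, assuming the homelab key pair from the question and a placeholder address:
# one-time manual seeding; prompts for the root password once
ssh-copy-id -i ~/.ssh/homelab.pub root@<vps-ip>
# afterwards, key auth works and no password or sshpass is needed
ansible -i <vps-ip>, all -m ping -u root
ansible-playbook playbooks/add_pub_keys.yaml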

Ansible authentication on Windows

I'm using Jenkins to run an Ansible playbook against a Windows host. I'm trying to do a very simple command like robocopy between a NAS share and a local directory on the Windows machine. The problem is that I keep receiving ACCESS DENIED (5). This shouldn't happen, because the user I run Ansible as (domain\sysansible) already has full rights; there is no problem when I run the same command on the Windows host or any other machine. I have noticed that when Jenkins runs the Ansible playbook, it is recognized not as domain\sysansible but as a member of the local admin group windows_host\administrator, which doesn't have rights to the NAS share (and cannot have them, because only domain accounts are approved).
My inventory file looks as follow:
[application_host]
lizard ansible_host=windows_host.domain.companynet.net ansible_connection=winrm ansible_winrm_transport=kerberos ansible_user=sysansible@company.com ansible_password=***** ansible_port=5986 ansible_winrm_server_cert_validation=ignore
My Ansible task is quite simple. It works when I exchange the source for a local directory instead of \\nas-share\applications\app-home. I have also tried some variations of the robocopy parameters, but they failed as well.
- name: Sync the contents of home directory to backup site, including subdirectories
  win_command: robocopy \\nas-share\applications\app-home d:\application\backup\home-folder /E /w:5 /r:2 /log:D:\Applications\log.txt /XD \\nas-share\applications\app-home\artifacts
  register: info_robocopy
  tags:
    - robo
The question for me is why I am being recognized as the local admin account. How can I be recognized on Windows as domain\sysansible?
You are using basic authentication, which does not allow you to delegate credentials to the next host you want to copy files to
(see https://docs.ansible.com/ansible/latest/user_guide/windows_winrm.html).
Try using CredSSP connections to your hosts, or establish a Kerberos connection with the following (example) variables in the inventory:
ansible_user=user@yourdomain
ansible_password=Passwordthere
ansible_port=5985
ansible_connection=winrm
ansible_winrm_transport=kerberos
ansible_winrm_message_encryption=auto
ansible_winrm_kerberos_delegation=yes
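A quick way to confirm which identity the WinRM session actually runs under (useful for spotting this kind of credential-delegation problem) is a small diagnostic task; this is a sketch, not part of the original answer:
- name: Show which identity the WinRM session runs under
  win_command: whoami /all
  register: who

- name: Print the result
  debug:
    var: who.stdout_lines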

Ansible: Not able to switch user from remote machine

I am new to Ansible and trying to copy some files to a remote machine.
I am able to copy to the remote server's tmp folder, but not to a particular user's folder.
I think it is possible if we can switch to that particular user, but I am not able to do so using a playbook.
Please help me with this.
Regards,
KP
This is a permission issue. The user you use to connect to the host does not have permission to write to that other user's folder.
If you have access to that user's account (e.g. your SSH key is accepted), you can simply define the user per task through remote_user:
- copy: src=... dest=...
  remote_user: <SET_OWNER_HERE>
If you do not have access, you can use the sudo flag to execute a task with root permissions. But make sure you set the permissions correctly, or the user might not be able to read/write those files:
- copy: src=... dest=... owner=<SET_OWNER_HERE> group=<SET_GROUP_HERE> mode=0644
  sudo: yes
Also, you can define the username under which the sudo command is executed with sudo_user:
- copy: src=... dest=...
  sudo: yes
  sudo_user: <SET_OWNER_HERE>
If sudo requires a password from you, you have to provide it, or the task will hang forever without any error message.
You can define this globally in the ansible.cfg:
ask_sudo_pass=True
Or pass the option when you call your playbook:
ansible-playbook ... --ask-sudo-pass
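Note that sudo, sudo_user and --ask-sudo-pass are the old (pre-1.9) spellings; current Ansible uses become instead. A sketch of the same task in modern syntax, with placeholder paths since the originals are elided above:
- name: Copy a file into another user's directory
  copy:
    src: /path/to/local/file       # placeholder
    dest: /home/otheruser/file     # placeholder
    owner: otheruser
    group: otheruser
    mode: "0644"
  become: yes
  become_user: root
The corresponding prompt option is ansible-playbook ... --ask-become-pass.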

Does Vagrant create the /etc/sudoers.d/vagrant file, or does it need to be added manually to use ansible sudo_user?

Some Ubuntu cloud images, such as this one:
http://cloud-images.ubuntu.com/vagrant/precise/20140120/precise-server-cloudimg-amd64-vagrant-disk1.box
have the file /etc/sudoers.d/vagrant with the content vagrant ALL=(ALL) NOPASSWD:ALL.
Other boxes, such as https://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_ubuntu-13.10_chef-provisionerless.box,
don't have it, and as a result Ansible commands with sudo_user don't work.
I can add the file with:
- name: ensure Vagrant is a sudoer
  copy: content="vagrant ALL=(ALL) NOPASSWD:ALL" dest=/etc/sudoers.d/vagrant owner=root group=root
  sudo: yes
I'm wondering if this is something Vagrant should be doing, because this task will not be applicable when running the Ansible playbook on a real (non-Vagrant) machine.
Vagrant requires privileged access to the VM, either using config.ssh.username = "root" or, more commonly, via sudo. The Bento Ubuntu boxes currently configure this directly in /etc/sudoers.
I don't know exactly what Ansible's sudo_user does or means, but I guess your provisioning is overriding /etc/sudoers. In that case you really need to ensure you don't lose Vagrant's sudo access to the VM, or build your own base box which uses sudoers.d.
As a side note, if you create a file under /etc/sudoers.d/, you should also set its mode to 0440, or at least some older sudo versions will refuse to apply it.
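Combining the asker's task with that advice, a sketch that also sets the 0440 mode and uses the copy module's validate option to have visudo check the file before it is installed:
- name: ensure Vagrant is a sudoer
  copy:
    content: "vagrant ALL=(ALL) NOPASSWD:ALL"
    dest: /etc/sudoers.d/vagrant
    owner: root
    group: root
    mode: "0440"
    validate: "visudo -cf %s"
  sudo: yes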

Simple Ansible playbook: Location of file for task to copy file to target servers?

As a proof of concept, I'm trying to create what is probably the simplest Ansible playbook ever: copying a single file from the Ansible server out to the server farm.
For completeness, Ansible is installed properly. The ping module works great! LoL
The playbook for my POC reads:
---
- hosts: Staging
  tasks:
    - name: Copy the file
      copy: src=/root/Michael/file.txt dest=/tmp/file.txt
When I run the command...
ansible-playbook book.yml
I get the following output (summarized)...
msg: could not find src=/root/Michael/file.txt
Various docs and web pages I've read said the path to the file can either be absolute or relative to the playbook. I've tried both without success.
Where should my file be to get ansible to copy it over to the target servers?
Thanks!
Found the error of my ways. The playbook and files were located in a directory that was not accessible to the account running the ansible-playbook command. So while the ansible-playbook process could read the playbook (I called the command from the directory where the file was located), it could not read the directory where the file was located and as a result could not find the file.
The solution was to move the playbook and files into a directory that could be read by the account running Ansible. After that, the playbook worked exactly as expected!
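A quick way to check for this kind of problem is to inspect the permissions of every component of the path as the account that runs ansible-playbook; namei from util-linux does this in one command (using the path from the question):
namei -l /root/Michael/file.txt
Every directory on the path needs execute (x) permission for that account, or files beneath it cannot be found even if the file itself is readable.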
