Ansible: Accessing standard user home path while being logged in as root - ansible

I am trying to copy a folder as sudo with Ansible from the remote user's home directory, ~/bin, to /usr/local.
This works:
- name: Copy Folder Working
  become_user: "{{ ansible_facts.env.SUDO_USER }}"
  become: yes
  command: sudo mv ~/bin /usr/local/bin
But this doesn't:
- name: Copy Folder Permission Error
  become_user: "{{ ansible_facts.env.SUDO_USER }}"
  become: yes
  copy:
    remote_src: yes
    src: ~/bin
    dest: /usr/share
So even though I use become: yes, Ansible apparently doesn't copy using sudo. I get the same permission error as if I had tried to copy the files manually without sudo. The paths seem to be correct. How can I avoid this, or is sticking to the first, self-implemented solution the way to go?
Additional Info
This is part of a playbook that is executed on a server. That server may have ubuntu as its default user, but any other Linux distribution's default user is also possible. Therefore the default user needs to be read from ansible_facts, which I try to do with "{{ ansible_facts.env.SUDO_USER }}". While SUDO_USER sounds odd, this seems to work, as I get the expected permission error:
fatal: [localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "_original_basename": null,
            "attributes": null,
            "backup": false,
            "checksum": null,
            "content": null,
            "dest": "/usr/share",
            "directory_mode": null,
            "follow": false,
            "force": true,
            "group": null,
            "local_follow": null,
            "mode": null,
            "owner": null,
            "remote_src": true,
            "selevel": null,
            "serole": null,
            "setype": null,
            "seuser": null,
            "src": "/home/ubuntu/bin",
            "unsafe_writes": false,
            "validate": null
        }
    },
    "msg": "Destination /usr/share not writable"
}
There's also the issue, which I still need to fix, that it merges instead of copying, but there are multiple possible solutions for that which I feel are unrelated (for example, copying the files from ~/bin/* instead and deleting the folder).
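A minimal sketch of one possible direction, assuming the permission error simply comes from become_user dropping privileges back to the unprivileged account: escalate to root (the default become user) and spell out the source path, since ~ would then expand to root's home (the /home/... layout is an assumption here):
- name: Copy Folder as root from the sudo user's home
  become: yes
  copy:
    remote_src: yes
    # /home/<user> assumes a conventional home layout on the target
    src: "/home/{{ ansible_facts.env.SUDO_USER }}/bin/"
    dest: /usr/local/bin/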

Related

Unable to create directory using ansible playbook

Steps to reproduce:
Ensure you have a VM running in VirtualBox (RHEL8).
Create an Ansible Galaxy collection:
ansible-galaxy collection init myorg.mycollection
Navigate into the roles directory and execute the following command:
ansible-galaxy role init myrole
Add the following code to roles/myrole/tasks/main.yml:
---
# tasks file for myrole
- name: Create /home/{{username}}/.ssh, if not exist
  file:
    path: "/home/{{username}}/.ssh"
    state: directory
Create a play.yml file with the following content:
---
- name: Configure Development Workstation
  hosts: my_user_name-rhel8
  connection: local
  debugger: on_failed
  gather_facts: no
  become_user: my_user_name
  vars:
    uname: "my_user_name"
  roles:
    - role: myorg.mycollection.myrole
      username: "{{ uname }}"
Build your collection with the following command:
ansible-galaxy collection build myorg/mycollection
Install your collection with the following command:
ansible-galaxy collection install ./myorg-mycollection-1.0.0.tar.gz --force
Run the playbook with the following command:
ansible-playbook play.yml -i my_user_name-rhel8, --ask-become-pass -vvv
Expected Result: The /home/username/.ssh folder should be created successfully.
Actual Result: Ansible fails with the following output:
[WARNING]: Platform darwin on host my_user_name-rhel8 is using the discovered Python interpreter at /usr/bin/python, but future
installation of another Python interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible/2.11/reference_appendices/interpreter_discovery.html for more information.
fatal: [my_user_name-rhel8]: FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "invocation": {
        "module_args": {
            "_diff_peek": null,
            "_original_basename": null,
            "access_time": null,
            "access_time_format": "%Y%m%d%H%M.%S",
            "attributes": null,
            "follow": true,
            "force": false,
            "group": null,
            "mode": null,
            "modification_time": null,
            "modification_time_format": "%Y%m%d%H%M.%S",
            "owner": null,
            "path": "/home/my_user_name/.ssh",
            "recurse": false,
            "selevel": null,
            "serole": null,
            "setype": null,
            "seuser": null,
            "src": null,
            "state": "directory",
            "unsafe_writes": false
        }
    },
    "msg": "There was an issue creating /home/anchavan as requested: [Errno 45] Operation not supported: '/home/my_user_name'",
    "path": "/home/my_user_name/.ssh"
}
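A minimal sketch of a more portable version of the task, assuming the Errno 45 comes from hard-coding /home on a target (here the local darwin machine) that keeps home directories elsewhere; ~{{ username }} lets the file module expand that user's actual home:
- name: Create {{ username }}'s .ssh directory, if it does not exist
  file:
    # ~<user> is expanded on the target, so this also works where homes are
    # not under /home (e.g. /Users on macOS); become may still be needed if
    # the play runs as a different account.
    path: "~{{ username }}/.ssh"
    state: directory
    mode: "0700"   # typical .ssh permissions; adjust if needed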

maven_artifact command of ansible is not working in Rundeck

Objective:
Download the artifact jar from Nexus to a target server. I have already tried downloading the jar via URL rather than via maven_artifact, which worked, but I wanted to try the maven_artifact module.
Code in playbook:
---
- name: TestOfMavenArtifactCommand
  hosts: Target_Server_Where_the_Jar_needs_to_be_downloaded
  tasks:
    - maven_artifact:
      group_id: ab.cdef.group
      artifact_id: artifact_name
      repository_url: 'https://nexusUrl/repository/maven-snapshots'
      dest: /folder/artifact_name.jar
      username: user
      state: present
      mode: 0755
When I run the Rundeck job, it fails. The error is like:
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible-playbook
python version = 2.7.5 (default, Jun 11 2019, 12:19:05) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
Using /etc/ansible/ansible.cfg as config file
Parsed /var/lib/rundeck/work/zxcv-folder-services/scpr-inventory_dev/dev/hosts.yml inventory source with ini plugin
[WARNING]: Ignoring invalid attribute: username
[WARNING]: Ignoring invalid attribute: artifact_id
[WARNING]: Ignoring invalid attribute: dest
[WARNING]: Ignoring invalid attribute: state
[WARNING]: Ignoring invalid attribute: mode
[WARNING]: Ignoring invalid attribute: repository_url
[WARNING]: Ignoring invalid attribute: group_id
PLAYBOOK: main.yml *************************************************************
1 plays in main.yml
fatal: [server00.companyxxx.dev ]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "artifact_id": null,
            "attributes": null,
            "backup": null,
            "classifier": "",
            "content": null,
            "delimiter": null,
            "dest": null,
            "directory_mode": null,
            "extension": "jar",
            "follow": false,
            "force": null,
            "group": null,
            "group_id": null,
            "keep_name": false,
            "mode": null,
            "owner": null,
            "password": null,
            "regexp": null,
            "remote_src": null,
            "repository_url": null,
            "selevel": null,
            "serole": null,
            "setype": null,
            "seuser": null,
            "src": null,
            "state": "devsent",
            "timeout": 10,
            "unsafe_writes": null,
            "username": null,
            "validate_certs": true,
            "version": "latest"
        }
    },
    "msg": "group_id must be set"
}
to retry, use: --limit #/var/lib/rundeck/work/abcd-batch-services/scpr-servers-operations_dev/deploy-servers/tasks/main.retry
I followed this reference:
https://docs.ansible.com/ansible/latest/modules/maven_artifact_module.html
This appears to be just an indentation error: the "Ignoring invalid attribute" warnings show that group_id and the other options were parsed as task-level keywords instead of maven_artifact arguments, which is why the module reports "group_id must be set". Try:
tasks:
  - maven_artifact:
      group_id: ab.cdef.group
      artifact_id: artifact_name
      repository_url: 'https://nexusUrl/repository/maven-snapshots'
      dest: /folder/artifact_name.jar
      username: user
      state: present
      mode: 0755

ansible user module always shows changed

I'm struggling to properly use ansible's user module. The problem is every time I run my playbook, the users I created always show as changed, even if I have already created them.
I found other people with the same issue here, though I am struggling to actually fix it based on the GitHub thread. This is probably the most helpful comment, which I didn't understand 👇
I can confirm that it only looked like a bug - adding the append
option to two tasks made it so that they're not always undoing the
work of the other, and fixed the permanently changed trigger. I did
not need to add "group:"
This is what my playbook looks like:
- name: Generate all users for the environment
  user:
    createhome: yes
    state: present # to delete
    name: "{{ item.user }}"
    groups: "{{ 'developers' if item.role == 'developer' else 'customers' }}"
    password: "{{ generic_password | password_hash('sha512') }}"
    append: yes
  with_items:
    - "{{ users }}"
My intention is to have every user belong to their own private group (User Private Groups) but also have developers belong to the developers group. With the current configuration this works, the problem being that Ansible always reports the user as "changed". I'll then add the developers group to the sudoers file; hence I'd like to add the user to the developers group.
e.g.
vagrant@ubuntu-bionic:/home$ sudo su - nick
$ pwd
/home/nick
$ touch file.txt
$ ls -al
-rw-rw-r-- 1 nick nick 0 Jul 3 12:06 file.txt
vagrant@ubuntu-bionic:/home$ cat /etc/group | grep 'developers'
developers:x:1002:nick,ldnelson,greg,alex,scott,jupyter
Here is the verbose output running against vagrant locally for one of the users:
changed: [192.168.33.10] => (item={'user': 'nick', 'role': 'developer', 'with_ga': False}) => {
    "append": true,
    "changed": true,
    "comment": "",
    "group": 1004,
    "groups": "developers",
    "home": "/home/nick",
    "invocation": {
        "module_args": {
            "append": true,
            "comment": null,
            "create_home": true,
            "createhome": true,
            "expires": null,
            "force": false,
            "generate_ssh_key": null,
            "group": null,
            "groups": [
                "developers"
            ],
            "hidden": null,
            "home": null,
            "local": null,
            "login_class": null,
            "move_home": false,
            "name": "nick",
            "non_unique": false,
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "password_lock": null,
            "remove": false,
            "seuser": null,
            "shell": null,
            "skeleton": null,
            "ssh_key_bits": 0,
            "ssh_key_comment": "ansible-generated on ubuntu-bionic",
            "ssh_key_file": null,
            "ssh_key_passphrase": null,
            "ssh_key_type": "rsa",
            "state": "present",
            "system": false,
            "uid": null,
            "update_password": "always"
        }
    },
    "item": {
        "role": "developer",
        "user": "nick",
        "with_ga": false
    },
    "move_home": false,
    "name": "nick",
    "password": "NOT_LOGGING_PASSWORD",
    "shell": "/bin/sh",
    "state": "present",
    "uid": 1002
}
This should be unrelated, but I am adding some users to the developers group as I intend to grant sudo access for certain commands.
generic_password | password_hash('sha512') is not idempotent. The salt of the hash changes each time password_hash runs.
To make it idempotent, either run it with a specific salt:
- name: Generate all users for the environment
  user:
    password: "{{ generic_password | password_hash('sha512', 'mysalt') }}"
or update the password on_create only:
- name: Generate all users for the environment
  user:
    update_password: on_create
(or register the return values and declare changed_when).
Consider external management of passwords e.g. Ansible Vault or Passwordstore. There is a lookup plugin for passwordstore. See ansible-doc -t lookup passwordstore. See also my implementation of Passwordstore.
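A minimal sketch of the passwordstore option, assuming a hypothetical store entry under users/<name> and combining it with update_password: on_create plus a per-user seeded salt so the hash stays stable between runs:
- name: Generate all users for the environment
  user:
    name: "{{ item.user }}"
    groups: "{{ 'developers' if item.role == 'developer' else 'customers' }}"
    append: yes
    # 'users/' + item.user is a hypothetical passwordstore path; create=true
    # generates and stores a password if the entry does not exist yet.
    password: "{{ lookup('passwordstore', 'users/' + item.user + ' create=true') | password_hash('sha512', 65534 | random(seed=item.user) | string) }}"
    update_password: on_create
  with_items:
    - "{{ users }}"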

Ansible Symbolic Link Task Role Failure

I am new to Ansible and am executing the following task:
- name: Create symbolic links
  file:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
    owner: "{{ jboss_usr }}"
    group: "{{ jboss_grp }}"
    state: link
  with_items:
    - { src: "/apps/etc/jboss", dest: "/etc/jboss" }
    - { src: "/apps/var/log/jboss", dest: "/var/log/jboss" }
And I got the following error:
2018-12-21 21:27:23,469 p=15185 u=ex_sam | failed: [hostname.x] (item={u'dest': u'/etc/jboss', u'src': u'/apps/etc/jboss'}) => {
    "changed": false,
    "invocation": {
        "module_args": {
            "attributes": null,
            "backup": null,
            "content": null,
            "delimiter": null,
            "dest": "/etc/jboss",
            "diff_peek": null,
            "directory_mode": null,
            "follow": true,
            "force": true,
            "group": "jboss",
            "mode": null,
            "original_basename": null,
            "owner": "jboss",
            "path": "/etc/jboss",
            "recurse": false,
            "regexp": null,
            "remote_src": null,
            "selevel": null,
            "serole": null,
            "setype": null,
            "seuser": null,
            "src": "/apps/etc/jboss",
            "state": "link",
            "unsafe_writes": null,
            "validate": null
        }
    },
    "item": {
        "dest": "/etc/jboss-as",
        "src": "/apps/etc/jboss"
    },
    "msg": "Error while linking: [Errno 13] Permission denied",
    "path": "/etc/jboss-as",
    "state": "absent"
}
I am trying to find out why the symbolic link creation failed.
I read the following:
https://docs.ansible.com/ansible/latest/reference_appendices/common_return_values.html#id6
It says the "changed" attribute is a boolean indicating whether the task had to make changes.
But there are lots of null parameters in the invocation:module_args elements of the JSON.
Does that mean the values are really "null", or are they being set to a default value?
I have looked into the Ansible documentation and I am not sure whether the invocation:module_args null values represent the outcome of trying to create the symbolic link, i.e. are the nulls the input or the output of executing the task?
I think some of the nulls are defaults, but I would appreciate some helpful comments on the possible relation between the JSON returned in my Ansible error log and the actual "Error while linking: [Errno 13] Permission denied".
Thanks all for your anticipated help.
I think the issue is the permissions on /etc. So you probably need to add the option:
become: true
to your task.
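For reference, a minimal sketch of the same loop with escalation added, assuming the connecting account is allowed to sudo on the target:
- name: Create symbolic links
  become: true
  file:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
    owner: "{{ jboss_usr }}"
    group: "{{ jboss_grp }}"
    state: link
  with_items:
    - { src: "/apps/etc/jboss", dest: "/etc/jboss" }
    - { src: "/apps/var/log/jboss", dest: "/var/log/jboss" }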

Ansible inventory: replace issues

I have an inventory like this:
[all:vars]
env_cidr_prefix='172.25'
antother_var="foo"
[VPN_SERVER]
vpn-server ansible_host="{{ env_cidr_prefix}}.0.1"
During the playbook run, the inventory holds only the private IP address.
I want to replace the content of "ansible_host=" with the public IP.
Example of a playbook:
- name: grab the vpn public_ip
  set_fact: PUBLIC_IP="{{ instance_eip.public_ip }}"
  when: inventory_hostname | search("vpn-server")

- name: update inventory with the vpn public ip
  replace:
    path: "{{ inventory_file }}"
    regexp: "{{ ansible_host }}"
    replace: "{{ PUBLIC_IP }}"
  when: inventory_hostname | search("vpn-server")
if
ansible_host="172.25.0.1"
the replace module will work correctly.
but this fails
ansible_host="{{ env_cidr_prefix}}.0.1"
debug output:
ok: [vpn-server] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "after": null,
            "attributes": null,
            "backup": false,
            "before": null,
            "content": null,
            "delimiter": null,
            "directory_mode": null,
            "encoding": "utf-8",
            "follow": false,
            "force": null,
            "group": null,
            "mode": null,
            "owner": null,
            "path": "/home/toluna/ansible/openvpn/env.properties",
            "regexp": "172.25.0.11",
            "remote_src": null,
            "replace": "1.1.1.1",
            "selevel": null,
            "serole": null,
            "setype": null,
            "seuser": null,
            "src": null,
            "unsafe_writes": null,
            "validate": null
        }
    },
    "msg": ""
}
Note: I can't use the add_host module since the playbooks run in different stages.
Is there a better way to do it?
Thanks
OK, after testing it I guess I understand what you are trying to achieve.
Several parts here:
The inventory file is like this:
vpn-server ansible_host="{{ env_cidr_prefix}}.0.1"
And you are trying to replace the literal 172.25.0.1, which doesn't exist in your file. You have "{{ env_cidr_prefix}}.0.1", not 172.25.0.1.
Options:
If you want to replace it that way, you can use a Jinja2 template file in your role, render the variable, and produce the inventory file the same way you are trying.
Override the /etc/hosts file of your Jenkins (which I really don't like) and play with the host name.
Play with your hosts variable in the playbook. Hosts playbook:
- name: Test
  hosts: "{{ variable_vpn_ip | default('vpn-server') }}"
And call it reading from a variable that you change ad hoc:
ansible-playbook play.yml -e "variable_vpn_ip=172.25.0.1"
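A minimal sketch of the first option, assuming a hypothetical templates/inventory.j2 in the role that contains the resolved line (e.g. vpn-server ansible_host={{ PUBLIC_IP }}) plus the rest of the inventory; rendering it on the control node rewrites the file with the real address:
- name: Rewrite the inventory with the vpn public ip
  template:
    src: inventory.j2              # hypothetical template shipped with the role
    dest: "{{ inventory_file }}"
  delegate_to: localhost
  when: inventory_hostname | search("vpn-server")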
