Creating a directory on Ubuntu using ansible playbook - amazon-ec2

I'm trying to create an EC2 instance through an Ansible playbook and then run a few commands on it. The EC2 instance is created successfully and I can also SSH into it. Now, when I try to run this task:
- name: Create scripts directory
  ansible.builtin.file:
    path: /scripts
    state: directory
    mode: '0755'
An error is generated:
{"changed": false, "msg": "There was an issue creating /scripts as requested: [Errno 30] Read-only file system: '/scripts'", "path": "/scripts"}
I have specified the mode as 0755, which includes read, write, and execute permissions. So why is it giving an error, and the same error even with 0777? I'm new to Ansible and Ubuntu in general.

You have to add the become option to use admin privileges (your user has to be a sudoer):
- name: Create scripts directory
  ansible.builtin.file:
    path: /scripts
    state: directory
    mode: '0755'
  become: yes
Note that this creates a folder at the root of the filesystem. Are you sure you want to do that?
To specify a password for sudo, run ansible-playbook with --ask-become-pass (-K for short). If you run a playbook utilizing become and the playbook seems to hang, most likely it is stuck at the privilege escalation prompt. Stop it with CTRL-c, then execute the playbook with -K and the appropriate password.
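Alternatively (a sketch, not part of the original answer), you can keep the become password in a vault-encrypted variables file so you are not prompted on every run; ansible_become_pass is the built-in variable Ansible reads for this:

# group_vars/all/vault.yml, encrypted with: ansible-vault encrypt group_vars/all/vault.yml
ansible_become_pass: myS3cret  # hypothetical password, for illustration only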

Your Ansible user doesn't have the required privileges to create a directory at that level. Either use another user (a bad option) or make sure the user has sudo capabilities and use become: yes to execute the task as sudo.
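For example, a minimal sketch with escalation at the play level, so every task runs via sudo (the host group name is illustrative):

- hosts: webservers  # illustrative group name
  become: yes
  tasks:
    - name: Create scripts directory
      ansible.builtin.file:
        path: /scripts
        state: directory
        mode: '0755'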

Related

How to become a user with root privileges in ansible

I am setting up a playbook that automatically configures my workstation. This will hopefully allow me to quickly install Linux somewhere and automatically have all the resources I need.
One of the steps is installing Homebrew, and I cannot figure out how to do it.
I have created this playbook:
- hosts: localhost
  become: yes
  become_user: myUser
  tasks:
    - name: Download homebrew install script from source
      get_url:
        url: https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh
        dest: ~/Downloads/install_homebrew.sh
        mode: 'u+rwx'
    - name: Install homebrew
      shell: ~/Downloads/install_homebrew.sh
and run it with ansible-playbook myplaybook.yaml.
However, when I execute it, there is a permission-denied error. Apparently this is because of how the copy module works (here). So I thought I'd just run sudo ansible-playbook myplaybook.yaml instead. This leads to the exact same permission error. I guess this is because I have become_user: myUser.
However, when I remove become_user, I obviously get another error, Destination /root/Downloads does not exist, because my destination is hard-coded to the user's Downloads directory.
So how can I execute the playbook as the user myUser but with root privileges? This would allow me to access root-owned resources but still refer to my home directory. In theory this should be possible, since I can run
sudo ls -a /root && ls ~/
and get both the contents of the root folder and of my home directory. But I don't know how to do this in Ansible.
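One common pattern (a sketch, not from the original thread) is to run the play as your normal user with no play-level become, so ~ still expands to your own home directory, and escalate only the tasks that actually need root:

- hosts: localhost
  tasks:
    - name: Download homebrew install script from source
      get_url:
        url: https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh
        dest: ~/Downloads/install_homebrew.sh
        mode: 'u+rwx'
    - name: Install a build dependency as root  # hypothetical root-only step
      become: yes  # escalates just this task
      apt:
        name: build-essential
        state: present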

Ansible Playbook Error: The powershell shell family is incompatible with the sudo become plugin

I am working on a simple playbook that will ultimately be able to start/stop/restart Windows services, and I ran into an issue:
fatal: [mspdbwn1w01]: FAILED! => {
"msg": "The powershell shell family is incompatible with the sudo become plugin"
}
Below is the playbook:
- name: Add Host
  hosts: localhost
  connection: local
  strategy: linear
  tasks:
    - name: Add Temp Host
      add_host:
        name: "{{ win_client }}"
        group: temp

- name: Target Server
  connection: winrm
  hosts: temp
  tasks:
    - name: Stop a service
      win_service:
        name: "{{ service }}"
        state: stopped
Google hasn't been much help, and I've tried everything I could find, every variation of become*.
I don't know if it matters, but due to the nature of the environment I work in, I have two separate users to log into *nix hosts vs. Windows hosts.
Any assistance or guidance would be greatly appreciated.
Your system seems to use sudo as the default become method, which is not compatible with PowerShell. For Windows (and PowerShell), you can use runas as the become method. Add:
become_method: runas
to your playbook or task. You can get a list of all available become methods with:
ansible-doc -t become -l
Example:
doas Do As user
dzdo Centrify's Direct Authorize
enable Switch to elevated permissions on a network device
ksu Kerberos substitute user
machinectl Systemd's machinectl privilege escalation
pbrun PowerBroker run
pfexec profile based execution
pmrun Privilege Manager run
runas Run As user
sesu CA Privileged Access Manager
su Substitute User
sudo Substitute User DO
You can view the documentation for a particular become method with:
ansible-doc -t become runas
If you still get errors, pay attention to the error message, as it is most probably a different one. Using privilege escalation requires the definition of a username and a password for this purpose, for example.
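As a minimal sketch (the account name and password variable are illustrative, not from the original playbook), the second play could look like this:

- name: Target Server
  connection: winrm
  hosts: temp
  become: yes
  become_method: runas
  become_user: Administrator  # illustrative Windows account
  vars:
    ansible_become_pass: "{{ win_admin_pass }}"  # supply securely, e.g. via vault
  tasks:
    - name: Stop a service
      win_service:
        name: "{{ service }}"
        state: stopped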

Issue with ansible copying file

I'm trying to provision a local Kubernetes cluster using Vagrant (v2.1.2) and VirtualBox (v5.2.20).
My Vagrantfile uses the ansible_local provisioner to run some Ansible playbooks to provision a K8s cluster using kubeadm.
This was all working perfectly a few months back when I ran it, but it's not working anymore. I'm not sure why it stopped working, but the Ansible playbook for the master node fails when trying to copy the kube config to the vagrant user's home dir.
The ansible task that fails is as follows:
- name: Copy kubeconfig for vagrant user
  copy:
    src: /etc/kubernetes/admin.conf
    dest: /home/vagrant/.kube/
    owner: vagrant
    group: vagrant
This is causing the following error: fatal: [master1]: FAILED! => {"msg": "an error occurred while trying to read the file '/etc/kubernetes/admin.conf': [Errno 13] Permission denied: '/etc/kubernetes/admin.conf'"}
The src file does exist. If I ssh into the VM after the failure, I can copy the file with sudo cp /etc/kubernetes/admin.conf /home/vagrant/, but the failure above causes the vagrant provisioning to fail/halt.
FYI, I've tried a few combinations of things at the play and task levels, e.g. become: true and remote_user: root:
---
- hosts: all
  become: true
  tasks:
    ...
... but to no avail.
permissions on admin.conf are as follows:
vagrant#master1:/etc/kubernetes$ ls -al admin.conf
-rw------- 1 root root 5453 Aug 5 14:07 admin.conf
Full master-playbook.yml can be found here.
How do I get ansible to copy the file?
Quoting from the copy module documentation:
src Local path to a file to copy to the remote server.
The play failed because the user running the Ansible playbook can't read the file on the controller (local path):
permission denied: '/etc/kubernetes/admin.conf'
Use remote_src: yes
remote_src If yes it will go to the remote/target machine for the src
- name: Copy kubeconfig for vagrant user
  copy:
    remote_src: yes
    src: /etc/kubernetes/admin.conf
    dest: /home/vagrant/.kube/
    owner: vagrant
    group: vagrant
From the comment, this seems to be what you want to do
"The src file does exist. If I ssh into the VM after the failure, I can copy the file with sudo cp /etc/kubernetes/admin.conf /home/vagrant/"
This should work if the remote user on the VM is allowed to sudo to root and escalation is properly set:
- hosts: all
  become: yes
  become_user: root
  become_method: sudo
It would not be enough to allow the remote user to run sudo cp only. See Can’t limit escalation to certain commands:
Privilege escalation permissions have to be general. Ansible does not always use a specific command to do something but runs modules (code) from a temporary file name which changes every time. If you have ‘/sbin/service’ or ‘/bin/chmod’ as the allowed commands this will fail ...
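Putting the two together, a minimal sketch of the corrected task with both escalation and remote_src (the rest of the play unchanged):

- hosts: all
  become: yes
  tasks:
    - name: Copy kubeconfig for vagrant user
      copy:
        remote_src: yes
        src: /etc/kubernetes/admin.conf
        dest: /home/vagrant/.kube/
        owner: vagrant
        group: vagrant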

ansible playbook: Cannot launch a service as root

I've been banging my head on this one for most of the day; I've tried everything I could without success, even with the help of my sysadmin. (Note that I am not at all an Ansible expert; I only discovered it today.)
Context: I'm trying to implement continuous integration of a Java service via GitLab. On a push, a pipeline will run tests, package the JAR, then run an Ansible playbook to stop the existing service, replace the JAR, and launch the service again. We have that for production in Google Cloud, and it works fine. I'm trying to add an extra step before that, to do the same on localhost.
And I just can't understand why Ansible fails to do a sudo service XXXX stop|start. All I get is:
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Sorry, try again.\n[sudo via ansible, key=nbjplyhtvodoeqooejtlnhxhqubibbjy] password: \nsudo: 1 incorrect password attempt\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}
Here is the GitLab pipeline stage that I call:
indexer-integration:
  stage: deploy integration
  script:
    - ansible-playbook -i ~/git/ansible/inventory deploy_integration.yml --vault-password-file=/home/gitlab-runner/vault.txt
  when: on_success
vault.txt contains the vault encryption password. Here is deploy_integration.yml:
---
- name: deploy integration saleindexer
  hosts: localhost
  gather_facts: no
  user: test-ccc # this is the user that I created as a test
  connection: local
  vars_files:
    - /home/gitlab-runner/secret.txt # holds the sudo password
  tasks:
    - name: Stop indexer
      service: name=indexer state=stopped
      become: true
      become_user: root
    - name: Clean JAR
      become: true
      become_user: root
      file:
        state: absent
        path: '/PATH/indexer-latest.jar'
    - name: Copy JAR
      become: true
      become_user: root
      copy:
        src: 'target/indexer-latest.jar'
        dest: '/PATH/indexer-latest.jar'
    - name: Start indexer
      service: name=indexer state=started
      become: true
      become_user: root
The user test-ccc is another user that I created (part of the root group and in the sudoers file) to make sure it was not an issue related to the gitlab-runner user (and because apparently no one here remembers the sudo password of that user xD).
I've tried a lot of things, including:
shell: echo 'password' | sudo -S service indexer stop
That works on the command line, but when executed by Ansible, all I get is a prompt asking me to enter the sudo password.
Thanks
Edit, per comment request: secret.txt contains:
ansible_become_pass: password
When using that user on the command line (su user / sudo service start ...) and prompted for that password, it works fine. The problem, I believe, is that either Ansible always prompts for the password, or the password is not properly passed to the task.
The sshd_config has a line 'PermitRootLogin yes'
OK, thanks to a response (now deleted) from techraf, I noticed that the line
user: test-ccc
is actually useless; everything was still run by the gitlab-runner user. So I:
put all my actions in a script, postbuild.sh
added gitlab-runner to the sudoers file and gave it NOPASSWD for that script:
    gitlab-runner ALL=(ALL) NOPASSWD:/home/PATH/postbuild.sh
removed everything about passing the password and the secret from the Ansible tasks, and used instead:
    shell: sudo -S /home/PATH/postbuild.sh
So that works: the script is executed, and the service is stopped/started. I'll mark this as answered, even though using service: name=indexer state=started and giving NOPASSWD:ALL for the user still caused an error (the one in my comment on the question). If anyone can shed light on that in the comments ...
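For reference, the standard pattern the playbook was originally attempting (a vaulted ansible_become_pass consumed by become) would look like this sketch; note that become_user defaults to root, so spelling it out was redundant:

- name: Stop indexer
  service:
    name: indexer
    state: stopped
  become: true  # become_user defaults to root; ansible_become_pass must be loaded, e.g. from secret.txt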

Ansible : Not able to switch user from remote machine

I am new to Ansible and trying to copy some files to a remote machine.
I am able to copy to the remote server's /tmp folder, but not able to copy to a particular user's folder.
I think it is possible if we can switch to that particular user, but I am not able to do so using a playbook.
Please help me with this.
Regards,
KP
This is a permission issue. The user you use to connect to the host does not have permission to write to that other user's folder.
If you have access to that user's account (e.g. your SSH key is accepted), you can simply define the user per task through remote_user:
- copy: src=...
        dest=...
  remote_user: <SET_OWNER_HERE>
If you do not have access, you can use the sudo flag to execute a task with root permissions. But make sure you set the permissions correctly or the user might not be able to read/write those files:
- copy: src=...
        dest=...
        owner=<SET_OWNER_HERE>
        group=<SET_GROUP_HERE>
        mode=0644
  sudo: yes
Also, you can define the username the sudo command is executed as with sudo_user:
- copy: src=...
        dest=...
  sudo: yes
  sudo_user: <SET_OWNER_HERE>
If sudo requires a password from you, you have to provide it or the task will hang forever without any error message.
You can define this globally in the ansible.cfg:
ask_sudo_pass=True
Or pass the option when you call your playbook:
ansible-playbook ... --ask-sudo-pass
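Note that this answer uses old directives: sudo, sudo_user, and --ask-sudo-pass were later replaced by become, become_user, and --ask-become-pass. A sketch of the last task in modern syntax (placeholders kept from the answer):

- copy:
    src: ...
    dest: ...
  become: yes
  become_user: <SET_OWNER_HERE>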
