Running an Ansible playbook from a file in Rundeck

I am trying to run an Ansible playbook stored on my local drive. I am using WSL 2, which is where I have installed both Ansible and Rundeck.
Playbook path: /home/hannan/wslNodeRedProjects/ansible/myplaybook1.yml
On providing the correct location of the playbook I get the following errors:
ERROR! the playbook: /home/hannan/wslNodeRedProjects/ansible/myplaybook1.yml could not be found
Failed: AnsibleNonZero: ERROR: Ansible execution returned with non zero code.
I am not sure why I am getting this error even after specifying the correct location.
I wanted to know if I am missing anything, or whether I need to provide other options, such as the Ansible binaries directory path, as well.

This error might indicate that the user establishing the local SSH connection to execute the playbook (default: rundeck) doesn't have execute permission on the full playbook path.
This can be resolved either by using a user with the right permissions, or by granting execute permission to the specific user with an ACL, like so:
$ setfacl -R -m user:rundeck:x /path/to/playbook/
setfacl - set file access control lists.
-R, --recursive - apply operations to all files and directories recursively.
-m, --modify - modify the ACL of a file or directory. ACL entries for this operation must include permissions.
See man setfacl for further reading.
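You can then inspect the result with getfacl as a quick sanity check (the path is the same placeholder as above):
$ getfacl /path/to/playbook/
Note that the rundeck user also needs read permission on the playbook file itself, plus execute permission on every directory along the path, so granting rx recursively may be more practical:
$ setfacl -R -m user:rundeck:rx /path/to/playbook/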

Related

Trying to copy an Ansible playbook from my local machine to a remote host, but getting bad permissions

I am trying to copy a playbook from my local machine to the host machine (an EC2 instance), but it says I have bad permissions, despite adding my key pair to ~/.ssh/id-rsa/ansible-benchmark.pem (ansible-benchmark.pem being the key).
The command I run is scp /Users/mohammedkhot/Documents/terraform-consul/cis-playbook/main.yaml ec2-18-170-61-4.eu-west-2.compute.amazonaws.com:/etc/ansible.
I am trying to copy my main.yaml file to /etc/ansible/.
I did also run chmod 400 before trying to copy it, but it didn't work.
This is the error I am getting:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0755 for '/Users/mohammedkhot/.ssh/id_rsa' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "/Users/mohammedkhot/.ssh/id_rsa": bad permissions
mohammedkhot@ec2-18-170-61-4.eu-west-2.compute.amazonaws.com: Permission denied (publickey).
lost connection
The "Permissions 0755 ... are too open" line in the output is telling you what is wrong: the permissions on the private key file on your workstation are too permissive.
Restrict the file permissions so the key is accessible only by your user, using chmod, and then attempt to upload the file to the remote machine again:
$ chmod 600 /Users/mohammedkhot/.ssh/id_rsa
$ scp /Users/mohammedkhot/Documents/terraform-consul/cis-playbook/main.yaml ec2-18-170-61-4.eu-west-2.compute.amazonaws.com:/etc/ansible

Ansible: run a command as the user who is logged into the system

I have an Ansible playbook that copies a few executables from a location and runs them. The playbook uses the svcadmin user to perform this.
I need to run these executables on the screen of user1, who is currently logged into the system.
Are there any commands I can use? I googled all over, but I can only find the run-as option. I want something like remote execution.
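A hedged sketch, assuming the target runs an X11 session and that "on the screen of user1" means launching a GUI program in that session: become can switch to user1, and the DISPLAY environment variable can point at that session (you may also need to point XAUTHORITY at user1's X authority file). The host group, path, and display number below are illustrative assumptions, not from the question:
- hosts: target_host
  tasks:
    - name: launch the executable in user1's graphical session
      command: /opt/tools/my_executable   # hypothetical path
      become: true
      become_user: user1
      environment:
        DISPLAY: ":0"                     # assumes user1's X session is on display :0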

.ansible/tmp/ansible-tmp-* Permission denied

The remote host throws an error while running an Ansible playbook, despite the user being a sudo user:
"/usr/bin/python: can't open file '/home/ludd/.ansible/tmp/ansible-tmp-1466162346.37-16304304631529/zypper'"
A fix that worked for me was to change the path of Ansible's remote_tmp directory in Ansible's configuration file, e.g.:
# /etc/ansible/ansible.cfg
[defaults]
remote_tmp = /tmp/${USER}/ansible
Detailed information can be found in the Ansible configuration documentation.
Note: with Ansible v4 (or later) this variable might look like ansible_remote_tmp; check the docs.
Caution: Ansible configuration settings can be declared in a configuration file, which will be searched for in the following order:
ANSIBLE_CONFIG (environment variable if set)
ansible.cfg (in the current directory)
~/.ansible.cfg (in the home directory)
/etc/ansible/ansible.cfg
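To confirm which configuration file Ansible actually picked up, you can check ansible --version; one of its output lines names the active config file (the exact output varies by version and setup):
$ ansible --version | grep 'config file'
config file = /etc/ansible/ansible.cfg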
I had to set the variable ansible_remote_tmp rather than remote_tmp to make it work.
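Since ansible_remote_tmp is a per-host connection variable, a minimal sketch of setting it in the inventory instead of in ansible.cfg (the group name and path here are placeholder assumptions):
[myhosts:vars]
ansible_remote_tmp=/tmp/ansible-tmp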
Changing remote_tmp didn't solve the issue for me. What did solve it, however, was removing --connection=local from the playbook invocation.
How does the file in question get to the host? Do you copy or sync it? If you do, you may want to run
chmod 775 fileName
on the file before you send it to the host.

Can I install (or remove) Ansible role from Galaxy using ansible-pull?

I'm working with Ansible using ansible-pull (it runs on cron).
Can I install an Ansible role from Ansible Galaxy without logging in to all the computers (just by adding a command to my Ansible playbook)?
If I understand you correctly, you're trying to download and install roles from Ansible Galaxy from the command line, in a hands-off manner, possibly repeatedly (via cron). If this is the case, here's how you can do it.
# download the roles
ansible-galaxy install --ignore-errors f500.elasticsearch groover.packerio
# run ansible-playbook to install the roles downloaded from Ansible Galaxy
ansible-playbook -i localhost, -c local <(echo -e '- hosts: localhost\n  roles:\n    - { role: f500.elasticsearch, elasticsearch_cluster_name: "my elasticsearch cluster" }\n    - { role: groover.packerio, packerio_version: 0.6.1 }\n')
Explanation / FYI:
To download roles from Ansible Galaxy, use ansible-galaxy, not ansible-pull. For details, see the manual. You can download multiple roles at once.
If the role had been downloaded previously, repeated attempts at downloading using ansible-galaxy install will result in an error. If you wish to call this command repeatedly (e.g. from cron), use --ignore-errors (skip the role and move on to the next item) or --force (force overwrite) to work around this.
When running ansible-playbook, we can avoid having to create an inventory file using -i localhost, (the comma at the end signals that we're providing a list, not a file).
-c local (same as --connection=local) means that we won't be connecting remotely but will execute commands on the localhost.
<() functionality is process substitution. The output of the command appears as a file, so we can feed a "virtual playbook file" into the ansible-playbook command without saving the playbook to the disk first (e.g., playbookname.yml).
As shown, it's possible to embed role variables, such as packerio_version: 0.6.1 and apply multiple roles in a single command.
Note that whitespace is significant in playbooks (they are YAML files). Just as in Python code, be careful about indentation. It's easy to make typos in long lines with echo -e and \n (newlines).
You can run updates of roles from Ansible Galaxy and ansible-playbook separately.
With a bit of magic, you don't have to create inventory files or playbooks (this can be useful sometimes). Installing Galaxy roles remotely via push is less hacky / cleaner, but if you prefer cron and pulling, then this can help.
I usually add roles from Galaxy as submodules in my own repository; that way I have control over when I update them, and ansible-pull will automatically fetch them, removing the need to run ansible-galaxy.
E.g.:
mkdir roles
git submodule add https://github.com/groover/ansible-role-packerio roles/groover.packerio
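When you later want to pick up a newer version of a role, a hedged example of updating such a submodule (the commit message is illustrative):
git submodule update --remote roles/groover.packerio
git commit -am "update groover.packerio role"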
Yes you can.
# install Ansible Galaxy requirements via the pull playbook itself
- hosts: localhost
  tasks:
    - command: ansible-galaxy install -r requirements.yml
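For completeness, a minimal requirements.yml that the task above could consume; the role names are reused from the earlier example, and the file contents are an assumption:
# requirements.yml
- src: f500.elasticsearch
- src: groover.packerio
  version: 0.6.1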

running ansible playbook against unavailable hosts (down/offline)

Maybe I'm missing something obvious, but Ansible playbooks (which work great for a network of SSH-connected machines) don't seem to have a mechanism to track which playbooks have been run against which servers and then re-run them when a node pops up / checks in. The playbook works fine, but if it is executed when some of the machines are down/offline, then those hosts miss those changes… I'm sure the solution can't be to run the whole playbook again and again.
Maybe it's about googling the correct terms… if someone understands the question, please help with what should be searched for, since this must be a common requirement… is this called automatic provisioning (just a guess)?
I'm looking for an Ansible-specific way, since I like two things about it (Python- and SSH-based… no additional client deployment required).
There is a built-in way to do this. By using the retry concept, we can accomplish retrying on failed hosts.
Step 1: Check that your ansible.cfg file contains:
[defaults]
# retry files
retry_files_enabled = True
retry_files_save_path = ~
Step 2: When you run your ansible-playbook with all the required hosts, it will create a .retry file named after the playbook.
Suppose you execute the command below:
ansible-playbook update_network.yml -e group=rollout1
It will create a retry file in your home directory listing the hosts that failed.
Step 3: After the first run, just run ansible-playbook in a loop, as below, with a while loop or a crontab:
while true
do
    ansible-playbook update_network.yml -i ~/update_network.retry
done
This automatically runs until the hosts in the ~/update_network.retry file are exhausted.
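Instead of a busy loop, a hedged crontab variant that only retries while a retry file exists (the schedule and playbook path are assumptions):
*/15 * * * * [ -f ~/update_network.retry ] && ansible-playbook /path/to/update_network.yml -i ~/update_network.retry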
Often the solution is indeed to run the playbook again; there are lots of ways to write playbooks so that they are idempotent and can be run over and over without harmful effects. For ongoing configuration remediation like this, some people choose to just run playbooks using cron.
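For example, a task like the following is naturally idempotent; re-running it on an already-configured host changes nothing (the package name is illustrative):
- name: ensure ntp is installed
  package:
    name: ntp
    state: present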
AnsibleWorks AWX has a method to do an on-boot or on-provision checkin that triggers a playbook run automatically. That may be more what you're asking for here:
http://www.ansibleworks.com/ansibleworks-awx
