ansible - transferring files and changing ownership

I'm new to Ansible. The following is my requirement:
Transfer files (.tar.gz) from one host to many machines (38+ nodes) under /tmp as user1
Log in to each machine as user2 and switch to the root user using sudo su - (with password)
Extract the archive to another directory (/opt/monitor)
Change a configuration in the file (/opt/monitor/etc/config -> host= )
Start the process under /opt/monitor/init.d
For this, should I use playbooks or ad-hoc commands?
I'd be happy to use ad-hoc mode in Ansible, as I'm wary of playbooks.
Thanks in advance

You’d have to write several ad hoc commands to accomplish this. I don’t see any good reason not to use a playbook here. You will want to learn about playbooks, but it’s not much more to learn than the ad hoc commands. The sudo parts are taken care of for you by using the -b option to “become” the root user via sudo. Ansible takes care of logging in for you via SSH.
The modules you’ll want to make use of are common for this type of setup where you’re installing something from source: yum, get_url, unarchive, service. As an example, here’s a pretty similar process to what you need, demonstrating installing redis from source on a RedHat-family system:
- name: install yum dependencies for redis
  yum: name=jemalloc-devel ... state=present

- name: get redis from file server
  get_url: url={{s3uri}}/common/{{redis}}.tar.gz dest={{tmp}}

- name: extract redis
  unarchive: copy=no src={{tmp}}/{{redis}}.tar.gz dest={{tmp}} creates={{tmp}}/{{redis}}

- name: build redis
  command: chdir={{tmp}}/{{redis}} creates=/usr/local/bin/redis-server make install

- name: copy custom systemd redis.service
  copy: src=myredis.service dest=/usr/lib/systemd/system/
  # and logrotate, redis.conf, etc

- name: enable myredis service
  service: name=myredis state=started enabled=yes
You could define custom variables like tmp and redis in a group_vars/all.yaml file. You’ll also want a site.yaml file to define your hosts and one or more roles.
You’d invoke the playbook with something like:
ansible-playbook site.yaml -b --ask-become-pass -v
This can operate on your 38+ nodes as easily as on one.
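For reference, the {{tmp}}, {{redis}} and {{s3uri}} variables used above could be defined in group_vars/all.yaml; a minimal sketch with placeholder values (the actual paths, version and URL are yours to choose):
# group_vars/all.yaml -- placeholder values
tmp: /tmp/build
redis: redis-6.2.14
s3uri: https://files.example.com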

You'll want a playbook to do this. At the simplest level, since you mention unpacking, it might look something like this:
- name: copy & unpack the file
unarchive: src=/path/to/file/on/local/host
dest=/path/to/target
copy=yes
- name: copy custom config
copy: src=/path/to/src/file
dest=/path/to/target
- name: Enable service
service: name=foo enabled=yes state=started
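For the config change from the original question (the host= line in /opt/monitor/etc/config), the lineinfile module is a reasonable fit. A sketch, assuming you define the monitor_host variable yourself:
- name: set host= in the monitor config
  lineinfile:
    dest: /opt/monitor/etc/config
    regexp: '^host='
    line: "host={{ monitor_host }}"   # monitor_host is a variable you would define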

Related

Question about how to use 1password within playbooks

I'm extremely new to Ansible, so please excuse all of the mistakes I'm about to make in this post. I have a handful of Linux servers at work and I want to use Ansible to update them regularly. We use 1Password, and I have the 1Password CLI installed and working on the server I have Ansible installed on. I can successfully pull passwords with this test playbook:
- hosts: localhost
  tasks:
    - name:
      debug:
        var: lookup("onepassword", "linuxserver1_localadmin")
I'm running into a wall trying to figure out how to use 1Password within a playbook to specify which password to use when connecting to a server. All of the servers will use the same username, but each has a different password. I know I can put ansible_password=xxxxx in vars, but that's plain text, so obviously I can't do that. So within the host file right now I have:
[linuxserver1]
10.x.x.x
[linuxserver1:vars]
ansible_user=linuxserver1_localadmin
[linuxserver2]
10.x.x.x
[linuxserver2:vars]
ansible_user=linuxserver2_localadmin
My goal is to run a very simple playbook like this (pseudo-yaml):
---
- hosts: linuxserver1
  tasks:
    - name: run updates
      vars:
        - password: lookup("onepassword", "linuxserver1_localadmin")
      command: yum update -y

- hosts: linuxserver2
  tasks:
    - name: run updates
      vars:
        - password: lookup("onepassword", "linuxserver2_localadmin")
      command: yum update -y
Eventually in the hosts file I will have linuxserver3/4/5 etc. Is there a way to specify the password with 1Password in the hosts file, or is it done in the playbook like I'm imagining in the pseudo-code?
Thanks for any and all help!
I can get this working with plain-text passwords in the hosts file, which I don't want to use. I don't know enough about YAML to even attempt to structure this.
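One pattern worth trying (a sketch, assuming the collection that provides the onepassword lookup, e.g. community.general, is installed and the 1Password CLI session is available on the controller) is to keep the lookup in group_vars rather than in the playbook, so each group resolves its own ansible_password:
# group_vars/linuxserver1.yml -- the item name is whatever your 1Password entry is called
ansible_user: linuxserver1_localadmin
ansible_password: "{{ lookup('community.general.onepassword', 'linuxserver1_localadmin') }}"
Because lookups run on the controller, this still uses the 1Password CLI on the machine Ansible runs from, and no plain-text password ever lands in the inventory.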

How can I ensure correct permissions when using Ansible to deploy an application

I am having a really hard time understanding how to approach privilege escalation when deploying a Python application using Ansible.
In my current setup:
The VM is provisioned by our sysadmins.
My user (let's call it "John") is not root, but can become it by calling sudo.
No matter what I do via Ansible, it ends up being done either as John or as root (with become: true).
I would like to:
1. install system packages like python3, pip or supervisord
2. set up a separate system user
3. download the code using the git repository
4. create a virtualenv
5. install the requirements in the virtualenv
6. set up supervisord
The list above is really easy when I do it manually: I log into the VM, use sudo su - to become the superuser to handle points 1-2, then su new-system-user to complete points 3-5, etc.
Some of the tasks are also really easy to do in Ansible when using become: true or my SSH user; however, I get many permission issues when using the newly created system user, and become_user: "{{ system_user }}" in tasks 3-5 seems not to work as intended.
I would like to ask you: what is the optimal way to tackle this issue? My only workaround is to do everything in the context of my SSH user and then copy & chown to the new system user, but this seems like a hack, and I'm pretty sure I'm missing something when it comes to correct privilege escalation.
Even if I can't fully answer "What is the optimal way to tackle this issue?", steps 1-3 and 6 might be possible in a way like this:
---
- hosts: pyapp
  become: true
  tasks:

    - name: Make sure packages are installed
      yum:
        name: supervisor
        state: latest

    - name: Create group in local system
      group:
        name: pyapp
        gid: '1234'

    - name: Configure local account in system
      user:
        name: "pyapp"
        system: yes
        createhome: yes
        uid: '1234'
        group: '1234'
        shell: /sbin/nologin
        comment: "My Python App"
        state: present

    - name: Download and unpack
      unarchive:
        src: "https://{{ DOWNLOAD_URL }}/myPythonApp.tar.gz"
        dest: "/home/pyapp/"
        remote_src: yes
        owner: "pyapp"
        group: "pyapp"

    - name: Make sure log directory exists
      file:
        path: "/var/log/pyapp"
        state: directory
        owner: 'pyapp'
        group: 'pyapp'

    - name: Make sure supervisord config file is provided
      copy:
        src: "pyapp.ini"
        dest: "/etc/supervisord.d/pyapp.ini"

    - name: Make sure service becomes started and enabled
      systemd:
        name: supervisord
        state: started
        enabled: true
        daemon_reload: true

    - name: Make sure application is in started state
      supervisorctl:
        name: pyapp
        state: started
with a Configuration File (pyapp.ini) like
[program:pyapp]
directory=/home/pyapp
command=/usr/bin/python /home/pyapp/myPythonApp.py
user=pyapp
process_name=%(program_name)s
autorestart=true
stderr_logfile=/var/log/pyapp/stderr.log
stdout_logfile=/var/log/pyapp/stdout.log
leaving other values at their defaults.
I am using this approach and it just works. Step 3, the download, should also be possible using the git module.
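Steps 4 and 5 (virtualenv and requirements) can likely be handled with the pip module. A minimal sketch, assuming the archive ships a requirements.txt under /home/pyapp (that path is an assumption), and noting that become_user for an unprivileged account may additionally require pipelining or ACL support for Ansible's temporary files on the target:
- name: Install requirements into a virtualenv owned by pyapp
  become: true
  become_user: pyapp
  pip:
    requirements: /home/pyapp/requirements.txt    # assumed location inside the unpacked archive
    virtualenv: /home/pyapp/venv
    virtualenv_command: python3 -m venv           # assumes python3 with the venv module is present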
Module documentation in order of occurrence
Understanding privilege escalation: become
group – Add or remove groups
user – Manage user accounts
unarchive – Unpacks an archive after (optionally) copying it from the local machine
file – Manage files and file properties
copy – Copy files to remote locations
systemd – Manage systemd units
supervisorctl – Manage the state of a program or group of programs running via supervisord
Further Readings
You may then have a look into pip – Manages Python library dependencies and its Examples, as well as Installing a Distribution Package and Running Supervisor.

How can I run an ansible role locally?

I want to build a docker image locally and deploy it so it can then be pulled on the remote server I'm deploying to. To do this I first need to check out code from git to be built.
I have an existing role which installs git, sets up keys for reading from our repo etc. I want to run this role locally to check out the code I care about.
I looked at local action, delegate_to, etc but haven't figured out an easy way to do this. The best approach I could find was:
- name: check out project from git
  delegate_to: localhost
  include_role:
    name: configure_git
However, this doesn't work: I get a complaint that there is a syntax error on the name line. If I remove the delegate_to line it works (but runs on the wrong server). If I replace include_role with debug, it will run locally. It's almost as if Ansible explicitly refuses to run an included role locally, though I can't find that anywhere in the documentation.
Is there a clean way to run this, or other roles, locally?
Extract from the include_role module documentation
Task-level keywords, loops, and conditionals apply only to the include_role statement itself.
To apply keywords to the tasks within the role, pass them using the apply option or use ansible.builtin.import_role instead.
Ignores some keywords, like until and retries.
I actually don't know whether the error you get is linked to delegate_to being ignored (I seriously doubt it is...). In any case, it's not the correct way to use it here:
- name: check out project from git
  include_role:
    name: configure_git
    apply:
      delegate_to: localhost
Moreover, this is most probably a bad idea. Let's imagine your play targets 100 servers: the role will run one hundred times (unless you also apply run_once: true). I would run my role "normally" on localhost in a dedicated play, then do the rest of the job on my targets in the next one(s).
- name: Prepare env on localhost
  hosts: localhost
  roles:
    - role: configure_git

- name: Do the rest on other hosts
  hosts: my_group
  tasks:
    - name: dummy.
      debug:
        msg: "Dummy"

Ansible playbook: check if service is up and, if not, install something

I need to install Symantec Endpoint Protection on my Linux system and I'm trying to write a playbook to do so.
When I want to install the program I use ./install.sh -i
but after the installation, when I run the installation again, I get this message:
root@TestKubuntu:/usr/SEP# ./install.sh -i
Starting to install Symantec Endpoint Protection for Linux
Downgrade is not supported. Please make sure the target version is newer than the original one.
This is how I install it in the playbook:
- name: Install_SEP
  command: bash /usr/SEP/install.sh -i
If possible, I would like to check whether the service is up and, if there is no service, install it; or maybe there is a better way of doing this.
Thank you very much for your time
Q: "I would like to check if the service is up and if there is no service then install it."
It's possible to use service_facts. For example, to check that a service is running:
vars:
  my_service: "<name-of-my-service>"
tasks:
  - name: Collect service facts
    service_facts:
  - name: Install service when not running
    command: "<install-service>"
    when: "my_service not in ansible_facts.services|
           dict2items|
           json_query('[?value.state == `running`].key')"
To check whether a service is installed at all, use
json_query('[].key')
(not tested)
Please try something like below.
- name: Check if service is up
  command: <command to check if service is up>
  register: output

- name: Install_SEP
  command: bash /usr/SEP/install.sh -i
  when: "'running' not in output.stdout"
Note: I have used 'running' in the when condition; if your service check command returns something more specific, use that instead of 'running'.
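As a concrete variant (a sketch only; "sepagent" below is a hypothetical service name, so check the real one on a machine where SEP is already installed), the check could be done with systemctl:
- name: Check if the SEP service is active
  command: systemctl is-active sepagent   # "sepagent" is a hypothetical service name
  register: sep_status
  failed_when: false                      # a non-zero exit simply means "not active"
  changed_when: false

- name: Install_SEP
  command: bash /usr/SEP/install.sh -i
  when: sep_status.stdout != "active"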

How to apply proxy and DNS settings to GNU/Linux Debian using a configuration management tool such as Ansible

I'm new to configuration management tools.
I want to use Ansible.
I'd like to set a proxy on several GNU/Linux Debian machines (in fact several Raspbian ones).
I'd like to append
export http_proxy=http://cache.domain.com:3128
to /home/pi/.bashrc
I also want to append
Acquire::http::Proxy "http://cache.domain.com:3128";
to /etc/apt.conf
I want to set the DNS to IP X1.X2.X3.X4 by creating an
/etc/resolv.conf file with
nameserver X1.X2.X3.X4
What playbook file should I write? How should I apply this playbook to my servers?
Start by learning a bit about Ansible basics and familiarize yourself with playbooks. Basically, you ensure you can SSH in to your Raspbian machines (using keys) and that the user Ansible invokes on these machines can run sudo. (That's the hard bit.)
The easy bit is creating the playbook for the tasks at hand, and there are plenty of pointers to example playbooks in the documentation.
If you really want to add a line to a file or two, use the lineinfile module, although I strongly recommend you create templates for the files you want to push to your machines and use those with the template module. (lineinfile can get quite messy.)
I second jpmens. This is a very basic problem in Ansible, and a very good way to get started using the docs, tutorials and example playbooks.
However, if you're stuck or in a hurry, you can solve it like this (everything takes place on the "Ansible master"):
Create a role structure like this:
cd your_playbooks_directory
mkdir -p roles/pi/{templates,tasks,vars}
Now create roles/pi/tasks/main.yml:
- name: Adds resolv.conf
  template: src=resolv.conf.j2 dest=/etc/resolv.conf mode=0644

- name: Adds proxy env setting to pi user
  lineinfile: dest=~pi/.bashrc regexp="^export http_proxy" insertafter=EOF line="export http_proxy={{ http_proxy }}"
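The APT proxy line from the question could be handled in the same tasks file with one more lineinfile task; a sketch, reusing the same http_proxy variable (note that the file is usually /etc/apt/apt.conf or a snippet under /etc/apt/apt.conf.d/ rather than /etc/apt.conf):
- name: Adds APT proxy setting
  lineinfile:
    dest: /etc/apt/apt.conf
    create: yes
    regexp: '^Acquire::http::Proxy'
    line: 'Acquire::http::Proxy "{{ http_proxy }}";'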
Then roles/pi/templates/resolv.conf.j2:
nameserver {{ dns_server }}
Then roles/pi/vars/main.yml:
dns_server: 8.8.8.8
http_proxy: http://cache.domain.com:3128
Now make a top-level playbook to apply the role, at your playbook root, and call it site.yml:
- hosts: raspberries
  roles:
    - { role: pi }
You can apply your playbook using:
ansible-playbook site.yml
assuming your machines are in the raspberries group.
Good luck.
