I'm trying to migrate from Salt to Ansible.
What is the Ansible equivalent of Salt's top file?
In Ansible you build an inventory, put your hosts into groups, and then run playbooks against those groups; the playbooks bring your hosts to the desired state (e.g. ensure that software is installed, files are present, etc.). Note that there is no agent software with Ansible; it uses SSH to do things on remote hosts.
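Roughly speaking, the combination of an inventory and a top-level playbook (often called site.yml) plays the role Salt's top file does: the inventory puts hosts into groups, and the playbook maps those groups to roles. A minimal sketch, where the group, host and role names are only placeholders:

# inventory (INI format)
[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com

# site.yml - maps groups to roles, much like a top file maps targets to states
- hosts: webservers
  roles:
    - apache

- hosts: dbservers
  roles:
    - postgresql

You would then run ansible-playbook -i inventory site.yml from your control machine.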
Related
Running ansible playbook on remote servers:
I have 250 Linux servers (SUSE Linux) where I need to apply patches. I need to create an inventory file with all the server names. If I want to run an Ansible playbook on all those 250 remote servers, I also need to create a common user (non-root) with sudo privileges on each of them, which is a cumbersome task as I would have to connect to every server and create it manually. How can I achieve this part? Any thoughts on this?
I have an Ansible YAML file and all the hosts listed in a separate file. When I run a playbook from the CLI, I would like to visualize in a web UI on which hosts the play was successful and on which hosts it failed. Are there any tools/apps that solve this issue?
You are asking for Ansible Tower (paid) or AWX (free). These two are essentially the same thing (AWX is the upstream project for Tower). With both you can run playbooks from a web UI, and there is an indication of which hosts failed.
I need to know if it's possible to call/execute Ansible playbooks from the target machine. I think I saw a vendor do it, or at least something similar: they downloaded a script and it ran the playbook.
If this is possible, how would it be done?
My goal is to run Ansible as a centralized server in AWS to perform tasks in multiple environments. Most are behind firewalls. Any recommendations/thoughts would be appreciated.
Sure. If your host installs Ansible on the target and feeds it all the playbooks, then you can run it like any other executable. Whether you should do that is another story, but technically there's no obstacle.
You can run ansible and ansible-playbook as you would any other binary on the target's $PATH, so any tool that facilitates running remote commands would work.
Because you are in AWS, one way might be to use AWS Systems Manager.
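For example, assuming the target instances are registered with Systems Manager and already have Ansible and the playbooks on disk, something along these lines could trigger a run via Run Command (the tag filter and playbook path are placeholders, not taken from the question):

# Hypothetical example: run an existing playbook on tagged instances via SSM Run Command
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=tag:Env,Values=staging" \
  --parameters 'commands=["ansible-playbook /opt/playbooks/site.yml"]'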
If you wanted to use Ansible itself to do this you could use the shell or command modules:
- hosts: target
  become: false
  gather_facts: false
  tasks:
    - name: ansible in ansible
      command: ansible --version
    - name: ansible-playbook in ansible
      command: ansible-playbook --version
Though, as with any situation where you reach for the shell or command modules, you have to be vigilant to maintain playbook idempotency yourself.
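For read-only commands such as the version checks above, one way to keep the reporting honest is to tell Ansible the task never changes anything:

- name: ansible in ansible
  command: ansible --version
  changed_when: false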
If your requirement is just the ability to execute Ansible commands remotely, you might look into AWX, which is the upstream project for Red Hat's Ansible Tower. It wraps Ansible in a nice user interface that lets you trigger playbooks remotely, with nice-to-haves like RBAC.
If you're OK with executing tasks remotely over SSH, take a look at Sparrowdo; it has out-of-the-box facilities to run bash scripts (read: the ansible executable) remotely from one master host to another. You can even use it to install all the Ansible dependencies, or whatever else you need for your scope.
I have a project with 3 environments: development (Vagrant), staging (cloud) and production (cloud). I would like all the environment variables my playbook uses to be in one file per environment. It works fine if I have one server per environment, as below:
my_proj/
  src/
  provisioner/
    host_vars/
      dev-vm.yml
    staging/
      host_vars/
        staging-web.yml
      inventory
    prod/
      host_vars/
        prod-web.yml
      inventory
    playbook.yml
  Vagrantfile
This way Vagrant uses the ansible_local provisioner to automatically apply the playbook, and the playbook uses variables from my_proj/provisioner/host_vars/dev-vm.yml.
Staging/production can be provisioned from within the Vagrant VM itself, or from any other machine with Ansible installed, using the command ansible-playbook provisioner/playbook.yml -i provisioner/staging.
My problem is when I add multiple servers to my staging/production inventory file: I have to duplicate my_proj/provisioner/staging/host_vars/staging-web.yml for each new server. Is there a way to have a "default" vars file per environment?
For example, if my staging inventory has 3 servers [staging-web, staging-cli-01, staging-cli-02], I don't want to have 3 similar files within my_proj/provisioner/staging/host_vars/ as staging-web.yml, staging-cli-01.yml, staging-cli-02.yml. Ideally I would just have my_proj/provisioner/staging/host_vars/all.yml.
Any ideas on how to accomplish this, or suggestions for a better way to organize the project?
It depends. If I understand your question correctly, and your requirements match the way your question is written (i.e. you only need the group behaviour in staging and production), then you can add a group_vars dir under staging and production and within there, place an all.yml, moving all your vars into this file.
When you use -i to specify your inventory file, the contents of group_vars, like host_vars, will be loaded.
You can further refine this approach by placing your hosts into groups within your inventory and replacing all.yml with <group_name>.yml.
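As a sketch of that layout, using the host names from your question (the group names here are just examples):

# my_proj/provisioner/staging/inventory
[web]
staging-web

[cli]
staging-cli-01
staging-cli-02

# my_proj/provisioner/staging/group_vars/all.yml applies to every staging host;
# my_proj/provisioner/staging/group_vars/web.yml would apply only to the [web] group.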
Beware, however: if you employ this technique for your Vagrant hosts (i.e. place a group_vars directory in the same directory as your playbook), you will likely not get the results you expect. This is due to group_vars at the playbook level having higher precedence than those at the inventory level.
See the Ansible Variables docs for more info.
I am a newbie to Ansible. I have managed to write playbooks that set up Apache, Tomcat and others, all on localhost. I am now trying to move this to other servers to test the playbooks.
I have done the following:
1. Added a section [webservers] in /etc/ansible/hosts and put the public IP for that instance there.
2. I invoked ansible-playbook like so:
ANSIBLE_KEEP_REMOTE_FILES=1 ansible-playbook -vvvv -s serverSetup.yml
My questions:
1. Where do I store the public SSH key for the target server?
2. How do I specify which public key to use?
There are a number of ways to do this: ansible.cfg, set_fact, environment vars.
ansible.cfg
You can have an Ansible config file (ansible.cfg) within your project folder which states which key to use; the setting lives under the [defaults] section:
[defaults]
private_key_file = /path/to/key/key1.pem
You can see an example of an ansible.cfg file here: https://raw.githubusercontent.com/ansible/ansible/devel/examples/ansible.cfg
set_fact
You can add the key using the set_fact module within your playbook; this can be hardcoded as below or templated. Note that the connection variable Ansible recognises is ansible_ssh_private_key_file:
- name: Use a particular private key for this playbook
  set_fact:
    ansible_ssh_private_key_file: /path/to/key/key1.pem
http://docs.ansible.com/ansible/set_fact_module.html
environment vars
See this stackoverflow post's answer for more information:
how to define ssh private key for servers fetched by dynamic inventory in files
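For example, Ansible reads the ANSIBLE_PRIVATE_KEY_FILE environment variable (the key path below is just a placeholder):

ANSIBLE_PRIVATE_KEY_FILE=/path/to/key/key1.pem ansible-playbook -i inventory serverSetup.yml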
Where do I store the public SSH key for the target server?
Wherever makes sense. Since these are keys that I may use to directly connect to the machine, I usually store them in ~/.ssh/ with my other private keys. For projects where I'm working on multiple computers or with other users, I store them in Ansible Vault and have a playbook that extracts them and stores them on the local machine.
How do I specify which public key to use?
group_vars is a good place to specify ansible_ssh_private_key_file.
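For example, to cover the [webservers] group from your inventory (the remote user and key path below are assumptions):

# group_vars/webservers.yml
ansible_user: ubuntu
ansible_ssh_private_key_file: ~/.ssh/key1.pem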
Ansible uses a user to connect to the target machine.
So if your user is ubuntu (-u ubuntu in the Ansible flags), your public key must be in ~ubuntu/.ssh/authorized_keys on the target machine.
And from the ansible --help command you have
--private-key=PRIVATE_KEY_FILE, --key-file=PRIVATE_KEY_FILE use this file to authenticate the connection
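So, assuming the private key lives in ~/.ssh/ and the remote user is ubuntu (both placeholders), your playbook could be run with:

ansible-playbook -u ubuntu --private-key ~/.ssh/key1.pem serverSetup.yml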