I have a requirement to install the AWS SSM Agent on multiple EC2 instances (of different OS flavors) using Ansible. Can someone suggest how to achieve this?
I wrote the following playbook and it works, but is there a way to achieve this using the "package" module? I am afraid my approach might reinstall the agent even if it is already installed. Thanks in advance.
(OR)
Do you think it is better to use shell commands (https://aws.amazon.com/premiumsupport/knowledge-center/install-ssm-agent-ec2-linux/) in the Ansible playbook to install the agent, selecting them based on the condition of the OS flavour?
---
- hosts: all
  remote_user: ansible
  become: true
  tasks:
    - name: install SSM if REDHAT
      command: "{{ item }}"
      loop:
        - sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
        - sudo systemctl enable amazon-ssm-agent
        - sudo systemctl start amazon-ssm-agent
      when: ansible_os_family == "RedHat"
    - name: install SSM if UBUNTU
      command: "{{ item }}"
      loop:
        - sudo snap install amazon-ssm-agent --classic
        - sudo systemctl start snap.amazon-ssm-agent.amazon-ssm-agent.service
      when: ansible_os_family == "Debian"
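For comparison, an idempotent sketch using package-manager modules instead of raw commands (the yum module can install straight from the RPM URL in the playbook above and is skipped when the agent is already present; the snap task assumes the community.general collection is installed):

```yaml
---
- hosts: all
  remote_user: ansible
  become: true
  tasks:
    - name: Install SSM agent on RedHat family
      yum:
        name: https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
        state: present
        disable_gpg_check: true  # the RPM comes from a URL, not a configured repo
      when: ansible_os_family == "RedHat"

    - name: Enable and start the SSM agent service (RedHat)
      systemd:
        name: amazon-ssm-agent
        enabled: true
        state: started
      when: ansible_os_family == "RedHat"

    - name: Install SSM agent on Debian family
      community.general.snap:
        name: amazon-ssm-agent
        classic: true
      when: ansible_os_family == "Debian"

    - name: Start the SSM agent service (Debian)
      systemd:
        name: snap.amazon-ssm-agent.amazon-ssm-agent.service
        state: started
      when: ansible_os_family == "Debian"
```

The generic package module alone can't cover this case cleanly, because the two OS families install from different sources (an RPM URL vs. a snap), so per-family tasks are still needed.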
Related
I am trying to upgrade pip using Ansible on Rocky Linux 8.5; you can refer to the snippet below from my YAML file.
- name: Upgrade pip
  pip:
    # caution: could change to /bin/pip3 without "-"
    executable: "/bin/pip-3"
    umask: "0022"
    name: "pip"
    state: "forcereinstall"
  environment:
    http_proxy: "{{ proxy }}"
    https_proxy: "{{ proxy }}"
  become: true
When running it, it shows an error suggesting to invoke pip through the Python interpreter with '-m pip' instead of running the pip executable directly.
What exact changes are required in my code snippet?
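As far as I know, the pip module always calls a pip executable directly and has no mode that goes through the interpreter, so one workaround for the "use python -m pip" error is to drop to the command module (a sketch; the interpreter path /usr/bin/python3 is an assumption for Rocky Linux 8.5):

```yaml
- name: Upgrade pip via python -m pip
  command: /usr/bin/python3 -m pip install --upgrade --force-reinstall pip
  environment:
    http_proxy: "{{ proxy }}"
    https_proxy: "{{ proxy }}"
  become: true
```

Note that command reports "changed" on every run; a changed_when condition can be added if accurate change reporting matters.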
I need to use a with_items loop to install apache2, sqlite3, and git in Ansible. I'm trying to use the code below, but it seems like nothing is happening.
---
- hosts: all
  sudo: yes
  name: install apache2, sqlite3, git on remote server
  tasks:
    - name: Install list of packages
      action: apt pkg={{item}} state=installed
      with_items:
        - apache2
        - sqlite3
        - git
You have to place the variable item inside double quotes. Try this code; it will work:
---
- name: install apache2, sqlite3, git on remote servers
  hosts: all
  become: true
  tasks:
    - name: Install packages
      package:
        name: "{{ item }}"
        state: present
      loop:
        - apache2
        - sqlite3
        - git
Try
---
- name: install apache2, sqlite3, git on remote servers
  hosts: all
  sudo: true
  tasks:
    - name: Install packages
      package:
        name: "{{ item }}"
        state: present
      loop:
        - apache2
        - sqlite3
        - git
See package – Generic OS package manager
"This module actually calls the pertinent package modules for each system (apt, yum, etc)."
See apt – Manages apt-packages if you need apt specific attributes.
I am trying to translate a shell script into Ansible.
Snippet of code that is confusing me:
sudo apt-get update
sudo ACCEPT_EULA=Y apt-get install msodbcsql mssql-tools
sudo apt-get install unixodbc-dev
What I have so far:
- name: Install SQL Server prerequisites
  apt: name={{item}} state=present
  update_cache: yes
  with_items:
    - msodbcsql
    - mssql-tools
    - unixodbc-dev
No idea where to tie in ACCEPT_EULA=Y.
This is an environment variable, so:
- name: Install SQL Server prerequisites
  apt:
    name: "{{ item }}"
    state: present
    update_cache: yes
  with_items:
    - msodbcsql
    - mssql-tools
    - unixodbc-dev
  environment:
    ACCEPT_EULA: "Y"
And mind the indentation. It's really important in YAML.
I've set up a box with a user david who has sudo privileges. I can ssh into the box and perform sudo operations like apt-get install. When I try to do the same thing using Ansible's "become privilege escalation", I get a permission denied error. So a simple playbook might look like this:
simple_playbook.yml:
---
- name: Testing...
  hosts: all
  become: true
  become_user: david
  become_method: sudo
  tasks:
    - name: Just want to install sqlite3 for example...
      apt: name=sqlite3 state=present
I run this playbook with the following command:
ansible-playbook -i inventory simple_playbook.yml --ask-become-pass
This gives me a prompt for a password, which I give, and I get the following error (abbreviated):
fatal: [123.45.67.89]: FAILED! => {...
failed: E: Could not open lock file /var/lib/dpkg/lock - open (13:
Permission denied)\nE: Unable to lock the administration directory
(/var/lib/dpkg/), are you root?\n", ...}
Why am I getting permission denied?
Additional information
I'm running Ansible 2.1.1.0 and am targeting an Ubuntu 16.04 box. If I use the remote_user and sudo options as per Ansible < v1.9, it works fine, like this:
remote_user: david
sudo: yes
Update
The local and remote usernames are the same. To get this working, I just needed to specify become: yes (see #techraf's answer):
Why am I getting permission denied?
Because APT requires root permissions (see the error: are you root?) and you are running the tasks as david.
Per these settings:
become: true
become_user: david
become_method: sudo
Ansible becomes david using the sudo method. It basically runs its Python script with sudo -u david in front.
the user 'david' on the remote box has sudo privileges.
It means david can execute commands (some or all) using sudo-executable to change the effective user for the child process (the command). If no username is given, this process runs as the root account.
Compare the results of these two commands:
$ sudo whoami
root
$ sudo -u david whoami
david
Back to the APT problem, you (from CLI) as well as Ansible (connecting with SSH using your account) need to run:
sudo apt-get install sqlite3
not:
sudo -u david apt-get install sqlite3
which will fail with the very exact message Ansible displayed.
The following playbook will escalate by default to the root user:
---
- name: Testing...
  hosts: all
  become: true
  tasks:
    - name: Just want to install sqlite3 for example...
      apt: name=sqlite3 state=present
remote_user is david. Call the playbook with --ask-pass and give david's password. If david doesn't have passwordless sudo, you should also call it with --ask-become-pass.
- name: Testing...
  hosts: all
  remote_user: david
  become: true
  become_method: sudo
  tasks:
    - name: Just want to install sqlite3 for example...
      apt: name=sqlite3 state=present
I had the same issue. It was solved by connecting as root and using --ask-become-pass and --ask-pass together:
- name: Testing...
  hosts: all
  remote_user: root
  become: true
  become_method: sudo
  tasks:
    - name: Just want to install sqlite3 for example...
      apt: name=sqlite3 state=present
ansible-playbook playbook.yml --ask-become-pass --ask-pass
I am running this ansible playbook:
---
- hosts: localhost
  remote_user: root
  tasks:
    - name: update system
      apt: update_cache=yes
    - name: install m4
      apt: name=m4 state=present
    - name: install build-essential
      apt: name=build-essential state=present
    - name: install gcc
      apt: name=gcc state=present
    - name: install gfortran
      apt: name=gfortran state=present
    - name: install libssl-dev
      apt: name=libssl-dev state=present
    - name: install python-software-properties
      apt: name=python-software-properties state=present
    - name: add sage ppa repo
      apt_repository: repo='ppa:aims/sagemath'
    - name: update system
      apt: update_cache=yes
    - name: install dvipng
      apt: name=dvipng state=present
    - name: install sage binary
      apt: name=sagemath-upstream-binary state=present
    - name: invoke create_sagenb script
      command: /usr/bin/screen -d -m sudo /root/databases-and-datamining-iiith/python-scripts/create_sagenb -i -y
    - name: invoke start_sage script
      command: /usr/bin/screen -d -m sudo /root/databases-and-datamining-iiith/python-scripts/start_sage -i -y
This playbook fails during the task "install build-essential" and stops with an error asking to run dpkg --configure -a.
How can I make the playbook run
dpkg --configure -a
first after hitting this error, and then continue with the remaining tasks?
Ansible in general is idempotent. That means you can simply run your playbook again after resolving the issue without conflicts.
This is not always true: with a more complex play where tasks depend on the results of other tasks, a failure can leave you in a state that is not so easy to fix with Ansible. But that is not the case with the tasks you provided.
If you want to speed things up and skip all the tasks and/or hosts that did not fail, you can work with --limit and/or --start-at-task:
When the playbook fails, you might notice Ansible shows a message including a command which will enable you to limit the play to hosts which failed. So if only 1 host failed you do not need to run the playbook on all hosts:
ansible-playbook ... --limit @/Users/your-username/name-of-playbook.retry
To start at a specific task, you can use --start-at-task. So if your playbook failed at the task "install build-essential" you can start again at right this task and skip all previous tasks:
ansible-playbook ... --start-at-task="install build-essential"
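If you want the play itself to recover, you can also run dpkg --configure -a as an early task, before any apt task (a sketch; the command is safe to run when there is nothing left to configure):

```yaml
- name: finish any interrupted dpkg runs
  command: dpkg --configure -a
  changed_when: false  # treat this as maintenance, not a change
```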
On a side note, the apt module is optimized to work with loops. You can speed up your play by combining the tasks into one single apt task:
tasks:
  - name: Install packages that we need for apt_repository
    apt: update_cache=yes
         name={{ item }}
         state=present
         cache_valid_time=3600
    with_items:
      - python-software-properties
      - python-software-properties-common
  - name: add sage ppa repo
    apt_repository: repo='ppa:aims/sagemath'
  - name: Install packages
    apt: update_cache=yes
         cache_valid_time=3600
         name={{ item }}
         state=present
    with_items:
      - m4
      - build-essential
      - gcc
      - gfortran
      - libssl-dev
      - dvipng
      - sagemath-upstream-binary
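Alternatively, on reasonably recent Ansible versions the apt module accepts a list directly for name, which installs everything in a single apt transaction without any loop (a sketch with the same package set):

```yaml
- name: Install packages
  apt:
    update_cache: yes
    cache_valid_time: 3600
    state: present
    name:
      - m4
      - build-essential
      - gcc
      - gfortran
      - libssl-dev
      - dvipng
      - sagemath-upstream-binary
```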