How can I install the aptitude package without this Ansible warning:
TASK [... : APT: Install aptitude package] ********************************************************************************
[WARNING]: Could not find aptitude. Using apt-get instead
My install code looks like:
- name: "APT: Install aptitude package"
  apt:
    name: aptitude
  # vars:
  #   ACTION_WARNINGS: false   # << DOES NOT WORK
Fixed (specifically) for the aptitude install:
- name: "APT: Install aptitude package"
  apt:
    name: aptitude
    force_apt_get: yes
based on https://github.com/ansible/ansible/blob/stable-2.8/lib/ansible/modules/packaging/os/apt.py#L1059
Or use module_defaults in your playbook, if you do not want to pollute your system with aptitude:
---
- hosts: ...
  module_defaults:
    apt:
      force_apt_get: yes
  tasks:
    - ...
Related
I tried a simple install/uninstall Ansible playbook with dropbear, but I am not able to remove the package by setting the apt state to absent.
---
# filename: install.yaml
- hosts: all
  become: yes
  tasks:
    - name: install dropbear
      tags: dropbear
      apt:
        name: dropbear
---
# filename: uninstall.yaml
- hosts: all
  become: yes
  tasks:
    - name: uninstall dropbear
      tags: dropbear
      apt:
        name: dropbear
        state: absent
When running the uninstall.yaml playbook, Ansible prints that the task is OK and the state has changed. But when I ssh into the target server, the dropbear command still exists.
Finally got it to work! Thanks to @zeitounator's hint.
Adding autoremove: yes alone still did not work, but after manually uninstalling dropbear with apt-get remove dropbear I found that there are dependencies. I tried using a wildcard with name: dropbear*, and then dropbear was removed.
---
# uninstall.yaml
- hosts: all
  become: yes
  tasks:
    - name: uninstall dropbear
      tags: dropbear
      apt:
        name: dropbear*
        state: absent
        autoremove: yes
        purge: yes
I think this method might also work for other packages whose dependencies cannot be removed by the Ansible apt module with autoremove.
I still don't know why autoremove does not work here; it is supposed to handle exactly this case of removing dependencies (weird).
I did not dig into why this happens, but you will get the exact same behavior if you simply install the package manually and run a plain removal with apt remove dropbear. The dropbear command will still be there until you apt autoremove the dependent packages that were installed as well.
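The manual behavior described above, as a shell sketch (Debian/Ubuntu; the dependency names are an assumption, on recent Debian dropbear is a metapackage pulling in dropbear-bin and dropbear-run):

```shell
# remove the metapackage only; its dependencies stay installed
sudo apt-get remove dropbear
# the binary is still found, because a dependency (e.g. dropbear-bin) ships it
command -v dropbear
# now remove the orphaned dependencies together with their config files
sudo apt-get autoremove --purge
```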
So the correct way to uninstall this particular package is:
- hosts: all
  become: yes
  tasks:
    - name: uninstall dropbear
      tags: dropbear
      apt:
        name: dropbear
        state: absent
        purge: true
        autoremove: true
Note that the purge might not be necessary for your particular problem but ensures that any trace of the package and its dependencies (e.g. config files...) are gone.
See the apt module documentation for more information.
I would like to install base CentOS into a specific directory, for example /var/centos. The directory is empty. I use the following yum command:
yum --disablerepo='*' --enablerepo='base,updates' --releasever=7 --installroot=/var/centos groupinstall '@core'
This command installs the base system without any problems. I want to do the same through Ansible, so I use the following task:
- name: Install base CentOS
  yum:
    state: latest
    name: "@core"
    installroot: "/var/centos/"
    disablerepo: "*"
    enablerepo: "base,updates"
    releasever: "7"
When executing a playbook I receive an error:
Cannot find a valid baseurl for repo: base/$releasever/x86_64
As I understand it, yum does not see any repos.
To check, I tried listing the repos:
- name: Install base CentOS
  yum:
    list: repos
    installroot: "/var/centos/"
When I execute the task, I receive nothing.
Why doesn't the yum module see the repos?
Update:
Before performing the Install base task, I executed rpm --root /var/centos --force --nodeps -ivh http://mirror.centos.org/centos/7/os/x86_64/Packages/centos-release-7-6.1810.2.el7.centos.x86_64.rpm, and after that everything worked.
The final playbook:
vars:
  root_path: "/var/centos/"
tasks:
  - name: Install package centos-release-7-6.1810.2.el7.centos.x86_64.rpm
    shell: rpm --root {{ root_path }} --force --nodeps -ivh http://mirror.centos.org/centos/7/os/x86_64/Packages/centos-release-7-6.1810.2.el7.centos.x86_64.rpm
  - name: Install base CentOS
    yum:
      state: installed
      name:
        - "@core"
      installroot: "{{ root_path }}"
      disablerepo: "*"
      enablerepo: "base,updates"
      releasever: "7"
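As a side note, the rpm shell task above runs again on every play. A sketch of making it idempotent with the creates argument (the marker file path is an assumption based on the file the centos-release package installs; root_path already ends with a slash):

```yaml
- name: Install package centos-release (skipped once the file exists)
  shell: rpm --root {{ root_path }} --force --nodeps -ivh http://mirror.centos.org/centos/7/os/x86_64/Packages/centos-release-7-6.1810.2.el7.centos.x86_64.rpm
  args:
    creates: "{{ root_path }}etc/centos-release"
```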
I am trying to translate a shell script into Ansible.
Snippet of code that is confusing me:
sudo apt-get update
sudo ACCEPT_EULA=Y apt-get install msodbcsql mssql-tools
sudo apt-get install unixodbc-dev
What I have so far:
- name: Install SQL Server prerequisites
  apt: name={{item}} state=present
  update_cache: yes
  with_items:
    - msodbcsql
    - mssql-tools
    - unixodbc-dev
No idea where to tie in ACCEPT_EULA=Y.
This is an environment variable, so:
- name: Install SQL Server prerequisites
  apt:
    name: "{{ item }}"
    state: present
    update_cache: yes
  with_items:
    - msodbcsql
    - mssql-tools
    - unixodbc-dev
  environment:
    ACCEPT_EULA: "Y"
And mind the indentation. It's really important in YAML.
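If more of your tasks need the variable, environment can also be set at play level, where it applies to every task in the play (a sketch under that assumption):

```yaml
- hosts: all
  become: yes
  environment:
    ACCEPT_EULA: "Y"
  tasks:
    - name: Install SQL Server prerequisites
      apt:
        name: "{{ item }}"
        state: present
        update_cache: yes
      with_items:
        - msodbcsql
        - mssql-tools
        - unixodbc-dev
```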
I have a rather long task in my Ansible playbook that installs various packages using APT. This task takes a very long time on my laptop. Is there any way to get Ansible to "echo" which item it is installing as it iterates through the packages, so I can get an idea of how much longer the task is going to take?
- name: install global packages
  apt: pkg={{ item }} update_cache=yes cache_valid_time=3600
  become: True
  become_user: root
  with_items:
    - git
    - vim
    - bash-completion
    - bash-doc
    - wput
    - tree
    - colordiff
    - libjpeg62-turbo-dev
    - libopenjpeg-dev
    - zlib1g-dev
    - libwebp-dev
    - libffi-dev
    - libncurses5-dev
    - python-setuptools
    - python-dev
    - python-doc
    - python-pip
    - virtualenv
    - virtualenvwrapper
    - python-psycopg2
    - postgresql-9.4
    - postgresql-server-dev-9.4
    - postgresql-contrib
    - postgresql-doc-9.4
    - postgresql-client
    - postgresql-contrib-9.4
    - postgresql-9.4-postgis-2.1
    - postgis-doc
    - postgis
    - nginx
    - supervisor
    - redis-server
In general, Ansible actually does exactly that: it outputs every item separately. Under the hood, that means it builds a Python module payload, uploads it to the host(s), and executes it, once per item.
The apt and yum modules, however, have been optimized for loops. Instead of looping over every item, Ansible builds a single payload that installs all loop items in one go.
Your command translates to something like this:
apt-get -y install git vim bash-completion bash-doc wput ...
So in this case, no, there is no way to output the separate steps to see where Ansible is. Because there are no separate steps.
The docs for the apt module are missing the note that is available on the yum module page:
When used with a loop of package names in a playbook, ansible optimizes the call to the yum module. Instead of calling the module with a single package each time through the loop, ansible calls the module once with all of the package names from the loop.
When you work with remote machines, this is actually a preferable behavior. This speeds up the play by a lot. If you run your playbook locally, of course there is not much benefit.
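Later Ansible versions make the single-transaction behavior explicit: the apt module accepts a list directly under name, so no loop is involved at all. A sketch with a shortened package list:

```yaml
- name: install global packages
  become: true
  apt:
    update_cache: yes
    cache_valid_time: 3600
    name:
      - git
      - vim
      - bash-completion
```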
A simple workaround would be to not use the apt module but run a shell command per package:
- name: install global packages
  shell: apt-get -y install {{ item }}
  become: True
  with_items:
    - git
    - vim
    # ...the rest of the package list
Find your ansible install directory using:
> python -c 'import ansible; print ansible.__file__'
/usr/local/lib/python2.7/dist-packages/ansible/__init__.pyc
Backup the <ansible_install>/runner/__init__.py file
/usr/local/lib/python2.7/dist-packages/ansible/runner/__init__.py
Edit the file, search for 'apt' and remove 'apt' from the list.
if len(items) and utils.is_list_of_strings(items) and self.module_name in [ 'apt', 'yum', 'pkgng' ]:
to
if len(items) and utils.is_list_of_strings(items) and self.module_name in [ 'yum', 'pkgng' ]:
That's it!
When I want a bit more feedback on the playbook run, I group packages together by their purpose. The cache_valid_time parameter lets you do this without the usual penalty of updating the repo cache for each task. I find this improves readability and documentation as well.
- name: install global packages
  apt: pkg={{ item }} update_cache=yes cache_valid_time=3600
  become: True
  with_items:
    - git
    - vim
    - bash-completion
    - bash-doc
    - wput
    - tree
    - colordiff
    - libjpeg62-turbo-dev
    - libopenjpeg-dev
    - zlib1g-dev
    - libwebp-dev
    - libffi-dev
    - libncurses5-dev
- name: install python and friends
  apt: pkg={{ item }} update_cache=yes cache_valid_time=3600
  become: True
  with_items:
    - python-setuptools
    - python-dev
    - python-doc
    - python-pip
    - virtualenv
    - virtualenvwrapper
    - python-psycopg2
- name: install postgresql
  apt: pkg={{ item }} update_cache=yes cache_valid_time=3600
  become: True
  with_items:
    - postgresql-9.4
    - postgresql-server-dev-9.4
    - postgresql-contrib
    - postgresql-doc-9.4
    - postgresql-client
    - postgresql-contrib-9.4
    - postgresql-9.4-postgis-2.1
- name: install postgis
  apt: pkg={{ item }} update_cache=yes cache_valid_time=3600
  become: True
  with_items:
    - postgis-doc
    - postgis
- name: install nginx
  apt: pkg=nginx update_cache=yes cache_valid_time=3600
  become: True
- name: install supervisor
  apt: pkg=supervisor update_cache=yes cache_valid_time=3600
  become: True
- name: install redis
  apt: pkg=redis-server update_cache=yes cache_valid_time=3600
  become: True
I am running this ansible playbook:
---
- hosts: localhost
  remote_user: root
  tasks:
    - name: update system
      apt: update_cache=yes
    - name: install m4
      apt: name=m4 state=present
    - name: install build-essential
      apt: name=build-essential state=present
    - name: install gcc
      apt: name=gcc state=present
    - name: install gfortran
      apt: name=gfortran state=present
    - name: install libssl-dev
      apt: name=libssl-dev state=present
    - name: install python-software-properties
      apt: name=python-software-properties state=present
    - name: add sage ppa repo
      apt_repository: repo='ppa:aims/sagemath'
    - name: update system
      apt: update_cache=yes
    - name: install dvipng
      apt: name=dvipng state=present
    - name: install sage binary
      apt: name=sagemath-upstream-binary state=present
    - name: invoke create_sagenb script
      command: /usr/bin/screen -d -m sudo /root/databases-and-datamining-iiith/python-scripts/create_sagenb -i -y
    - name: invoke start_sage script
      command: /usr/bin/screen -d -m sudo /root/databases-and-datamining-iiith/python-scripts/start_sage -i -y
This playbook fails during the task "install build-essential" and stops with an error asking to run dpkg --configure -a.
How can I make sure that the playbook, when run again after this error, first runs the command
dpkg --configure -a
and then continues with the other tasks?
Ansible in general is idempotent. That means you can simply run your playbook again after resolving the issue without conflicts.
This is not always true. In case you have a more complex play and execute tasks depending on the result of another task, this can break easily, and a failed task could then leave you in a state that is not easy to fix with Ansible. But that is not the case with the tasks you provided.
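If you want the play itself to recover, one option is to run the suggested command as an early task, before any apt task. A sketch; dpkg --configure -a does nothing and exits 0 when no packages are half-configured, and changed_when: false stops the task from reporting a change on every run:

```yaml
- name: finish any interrupted dpkg runs
  command: dpkg --configure -a
  changed_when: false
```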
If you want to speed things up and skip all the tasks and/or hosts that did not fail, you can work with --limit and/or --start-at-task:
When the playbook fails, you might notice Ansible shows a message including a command which will enable you to limit the play to hosts which failed. So if only 1 host failed you do not need to run the playbook on all hosts:
ansible-playbook ... --limit @/Users/your-username/name-of-playbook.retry
To start at a specific task, you can use --start-at-task. So if your playbook failed at the task "install build-essential" you can start again at right this task and skip all previous tasks:
ansible-playbook ... --start-at-task="install build-essential"
On a side note, the apt module is optimized to work with loops. You can speed up your play by combining the tasks into one single apt task:
tasks:
  - name: Install packages that we need for apt_repository
    apt: update_cache=yes
         name={{ item }}
         state=present
         cache_valid_time=3600
    with_items:
      - python-software-properties
      - python-software-properties-common
  - name: add sage ppa repo
    apt_repository: repo='ppa:aims/sagemath'
  - name: Install packages
    apt: update_cache=yes
         cache_valid_time=3600
         name={{ item }}
         state=present
    with_items:
      - m4
      - build-essential
      - gcc
      - gfortran
      - libssl-dev
      - dvipng
      - sagemath-upstream-binary