When running vagrant up I get the following error:
RUNNING HANDLER [mariadb : restart mysql server] *******************************
DEBUG subprocess: stdout: changed: [default]
INFO interface: detail: changed: [default]
changed: [default]
DEBUG subprocess: stdout:
PLAY RECAP *********************************************************************
INFO interface: detail:
PLAY RECAP *********************************************************************
PLAY RECAP *********************************************************************
DEBUG subprocess: stdout: default : ok=120 changed=83 unreachable=0 failed=1 skipped=32 rescued=0 ignored=0
INFO interface: detail: default : ok=120 changed=83 unreachable=0 failed=1 skipped=32 rescued=0 ignored=0
default : ok=120 changed=83 unreachable=0 failed=1 skipped=32 rescued=0 ignored=0
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 31722
DEBUG subprocess: Exit status: 2
ERROR warden: Error occurred: Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
Maybe this part of the output is also of interest:
ERROR vagrant: /opt/vagrant/embedded/gems/2.2.6/gems/vagrant-2.2.6/plugins/provisioners/ansible/provisioner/host.rb:104:in `execute_command_from_host'
/opt/vagrant/embedded/gems/2.2.6/gems/vagrant-2.2.6/plugins/provisioners/ansible/provisioner/host.rb:179:in `execute_ansible_playbook_from_host'
My version of Ansible is 2.9.2 (the newest, I guess). The Vagrantfile is large and I do not know which part is causing the error. How can I debug this?
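One way to narrow this down (a sketch; it assumes the Vagrantfile uses the host-based ansible provisioner): re-run only the provisioning step with full Ansible verbosity, so the failing task prints its complete module output instead of just failed=1 in the recap:
shell> ANSIBLE_VERBOSITY=3 vagrant provision
Equivalently, you can set ansible.verbose = "-vvv" in the Vagrantfile's ansible provisioner block and re-run vagrant provision.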
My molecule invocation looks like this:
molecule test -s default 2>&1 test.log
Unfortunately, the command doesn't test the scenario:
INFO default scenario test matrix: dependency, lint, cleanup, destroy, syntax, create, prepare, converge, idempotence, side_effect, verify, cleanup, destroy
INFO Performing prerun...
INFO Added ANSIBLE_LIBRARY=/Users/oschlueter/.cache/ansible-compat/e0666c/modules:/Users/oschlueter/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
INFO Added ANSIBLE_COLLECTIONS_PATH=/Users/oschlueter/.cache/ansible-compat/e0666c/collections:/Users/oschlueter/.ansible/collections:/usr/share/ansible/collections
INFO Added ANSIBLE_ROLES_PATH=/Users/oschlueter/.cache/ansible-compat/e0666c/roles:/Users/oschlueter/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
INFO Using /Users/oschlueter/.ansible/roles/company.python symlink to current repository in order to enable Ansible to find the role using its expected full name.
INFO Running default > dependency
WARNING Skipping, missing the requirements file.
WARNING Skipping, missing the requirements file.
INFO Running default > lint
COMMAND: set -e
yamllint .
ansible-lint
Loading custom .yamllint config file, this extends our internal yamllint config.
INFO Running default > cleanup
WARNING Skipping, cleanup playbook not configured.
INFO Running default > destroy
INFO Sanity checks: 'docker'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=instance)
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
ok: [localhost] => (item=instance)
TASK [Delete docker networks(s)] ***********************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
INFO Running default > syntax
ERROR! the playbook: test.log could not be found
CRITICAL Ansible return code was 1, command was: ['ansible-playbook', '--inventory', '/Users/oschlueter/.cache/molecule/python-venv/default/inventory', '--skip-tags', 'molecule-notest,notest', '--syntax-check', 'test.log', '/Users/oschlueter/git/ansible/roles/python-venv/molecule/default/converge.yml']
WARNING An error occurred during the test sequence action: 'syntax'. Cleaning up.
INFO Running default > cleanup
WARNING Skipping, cleanup playbook not configured.
INFO Running default > destroy
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=instance)
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
ok: [localhost] => (item=instance)
TASK [Delete docker networks(s)] ***********************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
It seems that molecule includes the target of my redirect into its ansible invocation for some reason.
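For what it's worth, 2>&1 test.log never redirects anything into test.log: the shell duplicates stderr onto stdout and then leaves test.log as a plain positional argument, which molecule forwards to ansible-playbook (hence the "the playbook: test.log could not be found" syntax-check error above). To actually capture the log:
shell> molecule test -s default > test.log 2>&1
shell> molecule test -s default 2>&1 | tee test.log    # keep the output on the terminal as well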
I am writing an Ansible playbook for 3 Tomcat nodes. The playbook does a lot of things, like copying the release from Nexus to the application nodes, deploying it, starting the Tomcats, etc.
What I want to achieve is to do this sequentially: the playbook should run for one host and, when it has completed, start on the next host. My inventory looks like the one below, and I am using group_vars since I have multiple environments (prod, preprod, etc.).
Can someone help?
[webserver]
tomcat1
tomcat2
tomcat3
Q: "The playbook should run for one host and when it gets completed, it should start for another host."
A: Use serial. For example:
shell> cat playbook.yml
- hosts: webserver
  gather_facts: false
  serial: 1
  tasks:
    - debug:
        var: inventory_hostname
shell> ansible-playbook playbook.yml
PLAY [webserver] ***
TASK [debug] ***
ok: [tomcat1] => {
"inventory_hostname": "tomcat1"
}
PLAY [webserver] ***
TASK [debug] ***
ok: [tomcat2] => {
"inventory_hostname": "tomcat2"
}
PLAY [webserver] ***
TASK [debug] ***
ok: [tomcat3] => {
"inventory_hostname": "tomcat3"
}
PLAY RECAP ***
tomcat1: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
tomcat2: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
tomcat3: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
See How to build your inventory
shell> cat hosts
[webserver]
tomcat1
tomcat2
tomcat3
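Note that serial also accepts a percentage or a list of batch sizes if one host at a time is too slow, e.g.:
- hosts: webserver
  gather_facts: false
  serial: "30%"        # or a list of batch sizes, e.g. serial: [1, 2]
  tasks:
    - debug:
        var: inventory_hostname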
When I try to install a list of packages ('apache2', 'libapache2-mod-wsgi', 'python-pip', 'python-virtualenv') on a group of hosts (app01 and app02), it fails randomly on either app01 or app02.
After the first few runs the required packages are already installed on the hosts, yet subsequent executions continue to fail randomly on app01 or app02 instead of simply reporting success.
The Ansible controller and the hosts are all running Ubuntu 16.04.
The controller has Ansible 2.8.4.
I have already set "become: yes" in the playbook.
I have already tried running apt-get clean and apt-get update on the hosts.
Inventory file
[webserver]
app01 ansible_python_interpreter=python3
app02 ansible_python_interpreter=python3
Playbook
---
- hosts: webserver
  tasks:
    - name: install web components
      become: yes
      apt:
        name: ['apache2', 'libapache2-mod-wsgi', 'python-pip', 'python-virtualenv']
        state: present
        update_cache: yes
In the first run, the execution fails on host app01.
In the second run, the execution fails on host app02.
1st Run
shekhar@control:~/ansible$ ansible-playbook -v playbooks/webservers.yml
Using /home/shekhar/ansible/ansible.cfg as config file
PLAY [webserver] ******************************************************************************
TASK [Gathering Facts] ******************************************************************************
ok: [app01]
ok: [app02]
TASK [install web components] ******************************************************************************
fatal: [app01]: FAILED! => {"changed": false, "msg": "Failed to lock apt for exclusive operation"}
ok: [app02] => {"cache_update_time": 1568203558, "cache_updated": false, "changed": false}
PLAY RECAP ******************************************************************************
app01 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
app02 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2nd Run
shekhar@control:~/ansible$ ansible-playbook -v playbooks/webservers.yml
Using /home/shekhar/ansible/ansible.cfg as config file
PLAY [webserver] ******************************************************************************
TASK [Gathering Facts] ******************************************************************************
ok: [app01]
ok: [app02]
TASK [install web components] ******************************************************************************
fatal: [app02]: FAILED! => {"changed": false, "msg": "Failed to lock apt for exclusive operation"}
ok: [app01] => {"cache_update_time": 1568203558, "cache_updated": false, "changed": false}
PLAY RECAP ******************************************************************************
app01 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
app02 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
The behavior is a bit strange… one succeeds and the other fails each time.
My guess is that your app01 and app02 are actually the same host! If so, it is normal that the two jobs fight against each other: both apt runs compete for the same exclusive dpkg lock.
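If they do turn out to be distinct machines, a minimal workaround for the lock contention is to serialize the play (a sketch of the same playbook with serial: 1):
---
- hosts: webserver
  serial: 1                  # one host per batch, so the two apt runs never overlap
  tasks:
    - name: install web components
      become: yes
      apt:
        name: ['apache2', 'libapache2-mod-wsgi', 'python-pip', 'python-virtualenv']
        state: present
        update_cache: yes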
Need assistance with ICP install. I have encountered the following error:
TASK [addon : Waiting for calico node service] *********************************
failed: [localhost -> 129.40.227.142] (item=129.40.227.142) => {"elapsed": 600, "failed": true, "item": "129.40.227.142", "msg": "Timeout when waiting for 129.40.227.142:9099"}
PLAY RECAP *********************************************************************
129.40.227.142 : ok=172 changed=66 unreachable=0 failed=0
129.40.227.143 : ok=157 changed=55 unreachable=0 failed=0
129.40.227.144 : ok=116 changed=24 unreachable=0 failed=0
localhost : ok=118 changed=52 unreachable=0 failed=1
Try explicitly setting calico_ip_autodetection_method: interface=<your interface name> in your config.yaml file.
Here's the documentation of all the things you can configure in that file:
https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/installing/config_yaml.html#network_setting
Kudos to Harry P.
We had a similar issue while installing.
The solution was to run the following command on each machine (use Ansible to make your life easier):
sysctl -w net.ipv4.conf.all.rp_filter=1
Make sure to also add it to /etc/sysctl.conf so it persists across restarts:
echo "net.ipv4.conf.all.rp_filter=1" | tee -a /etc/sysctl.conf
Hope this helps others.
In my case a similar issue was caused by running out of disk space in the /var folder. Check whether there are error messages in the /var/log/containers folder stating that some pods could not be started, or similar.
The documentation states you will need 40 GB of space. If you set up Docker to place the containers on your data drive, the install will use roughly 500 MB in /opt and 300 MB in /var.
I'm self-learning Ansible and have the file structure below with my customized ansible.cfg and inventory file:
ansible/                <-- working level (folder)
    ansible.cfg         (customized cfg)
    dev                 (inventory file)
    playbooks/          <-- issue level (folder)
        hostname.yml
ansible.cfg
[defaults]
inventory=./dev
dev
[loadbalancer]
lb01
[webserver]
app01
app02
[database]
db01
[control]
control ansible_connection=local
hostname.yml
---
- hosts: all
  tasks:
    - name: get server hostname
      command: hostname
My question is: when I run ansible-playbook playbooks/hostname.yml at the ansible (folder) level, everything works fine:
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [control]
ok: [app01]
ok: [db01]
ok: [lb01]
ok: [app02]
TASK [get server hostname] *****************************************************
changed: [db01]
changed: [app01]
changed: [app02]
changed: [lb01]
changed: [control]
PLAY RECAP *********************************************************************
app01 : ok=2 changed=1 unreachable=0 failed=0
app02 : ok=2 changed=1 unreachable=0 failed=0
control : ok=2 changed=1 unreachable=0 failed=0
db01 : ok=2 changed=1 unreachable=0 failed=0
lb01 : ok=2 changed=1 unreachable=0 failed=0
However, when I go into the playbooks folder and run ansible-playbook hostname.yml, it gives me a warning saying:
[WARNING]: provided hosts list is empty, only localhost is available
PLAY RECAP *********************************************************************
Is there anything that prevents the playbook from accessing the inventory file?
Ansible does not traverse parent directories in search of the configuration file ansible.cfg. If you want to use a custom configuration file, you need to place it in the current directory and run the Ansible executables from there, or define an environment variable ANSIBLE_CONFIG pointing to the file.
In your case, changing the directory to ansible and executing from there is the proper way:
ansible-playbook playbooks/hostname.yml
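Alternatively, a sketch with the environment variable (the absolute path is an example):
shell> cd playbooks
shell> ANSIBLE_CONFIG=/home/user/ansible/ansible.cfg ansible-playbook hostname.yml
Keep in mind that the relative inventory=./dev entry may then resolve against the directory you run from, so an absolute inventory path is safer in that setup.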
I don't know where you got the idea that Ansible would check for the configuration file in the parent directory; the documentation is unambiguous:
Changes can be made and used in a configuration file which will be processed in the following order:
ANSIBLE_CONFIG (an environment variable)
ansible.cfg (in the current directory)
.ansible.cfg (in the home directory)
/etc/ansible/ansible.cfg
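A quick way to verify which configuration file was actually picked up (the path shown is an example):
shell> ansible --version | grep 'config file'
  config file = /home/user/ansible/ansible.cfg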