I am getting the error below (/bin/sh: nslookup: command not found) when I am using Vagrant for Ansible code testing with Molecule.
TASK [Query nslookup own domain] ***********************************************
fatal: [ vagrant01 ]: FAILED! => {"changed": true, "cmd": "nslookup `hostname --fqdn`", "delta": "0:00:00.012528", "end": "2021-10-21 13:42:40.197574", "msg": "non-zero return code", "rc": 127, "start": "2021-10-21 13:42:40.185046", "stderr": "/bin/sh: nslookup: command not found", "stderr_lines": ["/bin/sh: nslookup: command not found"], "stdout": "", "stdout_lines": []}
PLAY RECAP
So it seems nslookup is not installed.
You can add a task to install the necessary tools:
- name: Install nslookup
  apt:
    name: dnsutils
    state: present
If you are on a Red Hat distro, you need the following package:
- name: Install nslookup
  yum:
    name: bind-utils
    state: present
Or, if you want to support any distro, you can add both (a distro-agnostic variant is also sketched after these tasks):
- name: Install nslookup for Ubuntu
  apt:
    name: dnsutils
    state: present
  when: ansible_os_family == 'Debian'

- name: Install nslookup for RedHat family
  yum:
    name: bind-utils
    state: present
  when: ansible_os_family == 'RedHat'
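As a minimal sketch of that distro-agnostic variant, you could also use Ansible's generic package module; the inline mapping from ansible_os_family to package name below is just an illustration and assumes only Debian and RedHat families are targeted:

- name: Install nslookup (any supported distro)
  package:
    # illustrative inline mapping; extend it if you target other OS families
    name: "{{ {'Debian': 'dnsutils', 'RedHat': 'bind-utils'}[ansible_os_family] }}"
    state: present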
Related
I'm using https://github.com/kubernetes-sigs/kubespray
Is there a way to know exactly which command the following Ansible code will execute on the remote server?
- name: ensure docker packages are installed
  package:
    name: "{{ docker_package_info.pkgs }}"
    state: "{{ docker_package_info.state | default('present') }}"
  module_defaults:
    apt:
      update_cache: true
    dnf:
      enablerepo: "{{ docker_package_info.enablerepo | default(omit) }}"
    yum:
      enablerepo: "{{ docker_package_info.enablerepo | default(omit) }}"
    zypper:
      update_cache: true
  register: docker_task_result
  until: docker_task_result is succeeded
  retries: 4
  delay: "{{ retry_stagger | d(3) }}"
  notify: restart docker
  when:
    - not ansible_os_family in ["Flatcar Container Linux by Kinvolk"]
    - not is_ostree
    - docker_package_info.pkgs|length > 0
I'm getting the following error on RHEL 8. I want to run docker-ce manually on the relevant VM and check the error.
{"attempts": 4, "changed": true, "cmd": "set -o pipefail && /usr/bin/docker ps -aq | xargs -r docker rm -fv", "delta": "0:00:00.008934", "end": "2021-07-16 18:46:54.009010", "msg": "non-zero return code", "rc": 127, "start": "2021-07-16 18:46:54.000076", "stderr": "/bin/bash: /usr/bin/docker: No such file or directory", "stderr_lines": ["/bin/bash: /usr/bin/docker: No such file or directory"], "stdout": "", "stdout_lines": []}
Thanks,
I'm getting an error when running the following Ansible task:
- name: List packages to upgrade (1/2)
  shell: aptitude -q -F%p --disable-columns search "~U"
  register: updates
  changed_when: False
  when: ansible_os_family == 'Debian'
Error:
TASK [List packages to upgrade (1/2)] ********************************************************************************************************************************
fatal: [php7e]: FAILED! => {"changed": false, "cmd": "aptitude -q -F%p --disable-columns search \"~U\"", "delta": "0:00:00.828791", "end": "2021-06-23 10:31:26.849961", "msg": "non-zero return code", "rc": 1, "start": "2021-06-23 10:31:26.021170", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
I noticed that on another server I don't get this error. The server that doesn't have the issue has aptitude 0.7.4, and the one with the errors has aptitude 0.8.10.
Obviously, aptitude search returns an exit code of 1 if no packages match the search term (in your case: if there are no upgradable packages),
as per the changelog:
aptitude (0.7.6-1) unstable; urgency=low
[...]
* [cmdline] "search" now exits with non-zero on errors or empty results
(Closes: #497299)
So the observed behaviour is correct and expected.
You must update your Ansible task accordingly.
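For example, a minimal sketch of the updated task could treat only return codes above 1 as failures, since 1 now just means "no matches":

- name: List packages to upgrade (1/2)
  shell: aptitude -q -F%p --disable-columns search "~U"
  register: updates
  changed_when: false
  # aptitude >= 0.7.6 exits 1 when nothing matches, so only rc > 1 is a real error
  failed_when: updates.rc > 1
  when: ansible_os_family == 'Debian'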
I am trying to execute the command /usr/local/bin/airflow initdb in the path /home/ec2-user/ on one of my EC2 servers.
TASK [../../roles/airflow-config : Airflow | Config | Initialize Airflow Database] ***
fatal: [127.0.0.1]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/airflow initdb", "delta": "0:00:00.004832", "end": "2019-07-23 14:38:22.928975", "msg": "non-zero return code", "rc": 127, "start": "2019-07-23 14:38:22.924143", "stderr": "/bin/bash: /usr/local/bin/airflow: No such file or directory", "stderr_lines": ["/bin/bash: /usr/local/bin/airflow: No such file or directory"], "stdout": "", "stdout_lines": []}
The ansible/roles/airflow-config/main.yml file is:
- name: Airflow | Config
  import_tasks: config.yml
  tags:
    - config
The ansible/roles/airflow-config/config.yml file is:
- name: Airflow | Config | Initialize Airflow Database
  shell: "/usr/local/bin/airflow initdb"
  args:
    chdir: "/home/ec2-user"
    executable: /bin/bash
  become: yes
  become_method: sudo
  become_user: root
ansible/plays/airflow/configAirflow.yml
---
- hosts: 127.0.0.1
  gather_facts: yes
  become: yes
  vars:
    aws_account_id: "12345678910"
  roles:
    - {role: ../../roles/airflow-config}
Here, I am trying to print the status of the firewall-cmd --state command, but a fatal error is being thrown.
- name: Check firewall status
  hosts: st
  tasks:
    - name: Check status of firewall
      command: firewall-cmd --state
      register: status

    - name: Print version
      debug:
        msg: "Status = {{ status.stdout }}"
The state is "not running" on the remote host, but I am not getting the result.
I get the following output
fatal: [borexample.com]: FAILED! => {"changed": true, "cmd": ["firewall-cmd", "--state"], "delta": "0:00:00.189023", "end": "2018-09-16 11:40:17.319482", "msg": "non-zero return code", "rc": 252, "start": "2018-09-16 11:40:17.130459", "stderr": "", "stderr_lines": [], "stdout": "\u001b[91mnot running\u001b[00m", "stdout_lines": ["\u001b[91mnot running\u001b[00m"]}
How should I modify the code so that I get only the state?
I prefer using failed_when: to control which return codes count as a failure. More info in the Ansible documentation. But you can also use ignore_errors: true.
Check the error codes in the firewall-cmd documentation to see which codes to add to your playbook.
In your scenario it could be good to do:
- name: Check status of firewall
  command: firewall-cmd --state
  register: status
  failed_when:
    - status.rc != 0
    - status.rc != 252
You can even go further and use failed_when: false so the command never fails.
The ignore_errors suggested by Baptiste Mille-Mathias would also allow you to continue, but then you would want to debug {{ status.stderr }}, as in that case stdout would be empty.
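As a minimal sketch of that variant, assuming the same register name status, you could suppress the failure entirely and print both streams:

- name: Check status of firewall
  command: firewall-cmd --state
  register: status
  # never mark this task failed; inspect the registered result instead
  failed_when: false

- name: Print firewall state
  debug:
    msg: "Status = {{ status.stdout }} {{ status.stderr }}"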
I am trying to download a Go package from GitHub. This is what my playbook looks like:
- name: Fetch latest gogs repository
  shell: "go get -u github.com/gogits/gogs"
  become: true
  become_user: git
It is throwing the following error:
{
"changed": true,
"cmd": "go get -u github.com/gogits/gogs",
"delta": "0:00:00.002695",
"end": "2017-08-22 10:50:02.984669",
"failed": true,
"invocation": {
"module_args": {
"_raw_params": "go get -u github.com/gogits/gogs",
"_uses_shell": true,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"warn": true
}
},
"rc": 127,
"start": "2017-08-22 10:50:02.981974",
"stderr": "/bin/sh: go: command not found",
"stderr_lines": [
"/bin/sh: go: command not found"
],
"stdout": "",
"stdout_lines": []
}
When I try this:
- name: Fetch latest gogs repository
  shell: "go get -u {{ gogs_repo }}"
  environment:
    - PATH: $PATH:/usr/local/go/bin:/usr/bin
    - GOPATH: "{{gogs_home}}/{{ gogs_project_directory }}/src"
    - GOBIN: "{{gogs_home}}/{{ gogs_project_directory }}/bin"
  become: true
  become_user: git
I got this error
fatal: [atul-ec2]: FAILED! => {
"changed": false,
"failed": true,
"module_stderr": "Shared connection to ec2-13-126-203-235.ap-south-1.compute.amazonaws.com closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_gntmXa/ansible_module_command.py\", line 220, in <module>\r\n main()\r\n File \"/tmp/ansible_gntmXa/ansible_module_command.py\", line 163, in main\r\n os.chdir(chdir)\r\nOSError: [Errno 13] Permission denied: '/home/ec2-user/goprojects/src/src/github.com/gogits/gogs'\r\n",
"msg": "MODULE FAILURE",
"rc": 1
}
Here are my variables:
---
go_version: go1.7.linux-amd64.tar.gz
go_url: https://storage.googleapis.com/golang/{{ go_version }}
go_hash: sha256:702ad90f705365227e902b42d91dd1a40e48ca7f67a2f4b2fd052aaa4295cd95
go_project_dir: goprojects
go_home: "{{ ansible_env.HOME }}"
gogs_home: "/home/git"
gogs_project_directory: "git.varadev.com"
gogs_repo: github.com/gogits/gogs
But when I use the following command on my server:
which go
I got this
/usr/local/go/bin/go
And when I try go get -u github.com/gogits/gogs manually, it works fine.
Hope this can help you as a starting point:
---
- hosts: all
  connection: local
  tasks:
    - name: check go version
      command: go version
      register: result
      changed_when: no
      ignore_errors: true

    - set_fact:
        go_path: "{{ lookup('env', 'GOPATH') | default(ansible_env.HOME+'/go', true) }}"
      when: result is not failed

    - name: go get gogs
      shell: go get -u github.com/gogits/gogs
      environment:
        GOPATH: "{{ go_path }}"
      register: gogs
      when: result is not failed

    - debug: var=gogs
Try to run this on your remote server by typing:
ansible-playbook gogs.yml -i localhost,
If that works, then try it remotely later.
Normally you don't want to do this, since you want to execute this remotely over SSH, but given the errors you have been getting so far, trying it locally with connection: local could help to debug the issue in more detail.
I know this post is old, but maybe this will help someone.
The issue /bin/sh: go: command not found is because you are missing some configuration when Ansible runs; you probably need to source the bash profile, like this:
- name: Install gogs
  shell: "source ~/.bash_profile && go get -u github.com/gogits/gogs"
  args:
    chdir: /home/{{ owner }}
  become_user: '{{ owner }}'
That worked for me.