ansible chdir module not working on remote server? - ansible

I am using an Ansible playbook that runs fine on my local system, but when I run the same playbook against a remote server it fails with an error from the chdir argument of the command module: "msg": [Errno 2] No such file or directory, "rc": 2.
Could anyone help me figure out what the exact issue is here?
- hosts: all
  vars:
    test: "Test Successfull"
    repo_dir: /media/disk1/sandbox/xyz
    path: /media/disk1/sandbox/xyz/api
  tasks:
    - debug:
        msg: "{{ test.split()[0] }} {{ test.split()[1] }}"
    - name: Running npm install in directory "{{ path }} and {{ repo_dir }}/lib as well"
      command: npm install
      args:
        chdir: "{{ item }}"
      loop:
        - "{{ path }}"
        - "{{ repo_dir }}/lib"
      become_user: yash
      become: yes

Please try it as below.
- name: Running npm install
  npm:
    path: "{{ package_path }}"
    executable: "{{ npm_path }}"
where {{ package_path }} is the path to the directory containing your package.json and {{ npm_path }} is /usr/bin/npm (or wherever npm is installed on your target).
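For the two directories in the question, a loop-based version of that suggestion might look like the sketch below (the variables are the ones from the question, it assumes a package.json exists in each directory, and on newer Ansible the module lives in the community.general collection):
- name: Run npm install in each project directory
  community.general.npm:
    # directory that contains package.json; path and repo_dir come from the question's vars
    path: "{{ item }}"
  loop:
    - "{{ path }}"
    - "{{ repo_dir }}/lib"
  become: yes
  become_user: yash
Note that the original [Errno 2] No such file or directory error usually just means the chdir directory does not exist on that remote host, so it is worth confirming the path is actually present there first.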

Related

Issue with AWX not with command line

I have an Ansible role that works from the command line but not through AWX.
Here is the role:
- name: Enable persistent logging
  ansible.builtin.lineinfile:
    path: /etc/systemd/journald.conf
    regexp: '^#Storage'
    line: Storage=persistent

- name: Check directory
  ansible.builtin.stat:
    path: "{{ journal_dir }}"
  register: journaldir

- block:
    - name: Create directory
      ansible.builtin.file:
        path: "{{ journal_dir }}"
        state: directory
        mode: '0755'

    - name: Enable systemd-tmpfiles folder
      ansible.builtin.command: /bin/systemd-tmpfiles --create --prefix {{ journal_dir }}
      check_mode: no
      notify:
        - restart systemd-journald
  when: journaldir.stat.exists == false and ansible_distribution_major_version >= '7'
Here is the notify handler:
- name: restart systemd-journald
  ansible.builtin.service:
    name: systemd-journald
    state: restarted
{{ journal_dir }} is /var/log/journal
I have no issue when I run the playbook from my terminal, but when I run it with AWX I get this error:
TASK [journalctl : Enable systemd-tmpfiles folder] *****************************
fatal: [server]: FAILED! => {"changed": false, "msg": "no command given", "rc": 256}
I have also tested with the shell module and it shows the same behaviour.
And I don't understand why.
Thank you for your help.
I found the issue: it seems I'm using an old version of Ansible, so I had to remove the FQCN for the command module.
It works like this:
- name: Enable systemd-tmpfiles folder
  command: /bin/systemd-tmpfiles --create --prefix {{ journal_dir }}
  notify:
    - restart systemd-journald
My Ansible version on the command line is 2.9.27; the Ansible version in AWX is 2.9.14.
[The solution was found here]: Ansible can't run any command or shell

Execute curl command in ansible playbook

I am trying to install RKE2 with Ansible,
but the command for installing RKE2 isn't an apt command.
The command for installing RKE2 is:
curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION=v1.21.6+rke2r1 sh -
I have no idea how to convert this command into playbook code.
I checked other posts about curl and Ansible, but they don't help in my case.
Can this work?
vars:
  script_dir: mydir
tasks:
  - file:
      state: directory
      path: "{{ script_dir }}"
  - name: download RKE2
    get_url:
      url: https://get.rke2.io
      validate_certs: false
      dest: "{{ script_dir }}/install.sh"
      mode: 0755
  - name: install RKE2
    command: "{{ script_dir }}/install.sh"
    environment:
      INSTALL_RKE2_VERSION: v1.21.6+rke2r1
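If you prefer to keep the upstream one-liner as-is, the shell module can also run the piped command directly. This is a minimal sketch, not part of the original answer, and the creates path is an assumption about where the install script places the binary:
- name: install RKE2 via the upstream install script
  shell: curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION=v1.21.6+rke2r1 sh -
  args:
    # assumption: skip the task if rke2 already appears to be installed at this path
    creates: /usr/local/bin/rke2
The get_url + command approach above is generally easier to audit and make idempotent than piping a downloaded script into sh.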

ansible create local directory (master)

I forgot how to create a directory on localhost (the Ansible server).
I'm using my Ansible server as a cache to download files and then copy them to the remote hosts.
Here is an example of the task and playbook:
tasks
- name: Create temp folder
  file:
    path: "{{ item }}"
    state: directory
    mode: 0755
  with_items:
    - /tmp/foo/
playbook
- hosts: foo
  roles:
    - foo
I tried this, but it doesn't work:
- name: Create temp folder
  file:
    path: "{{ item }}"
    state: directory
    mode: 0755
    remote_src: no
  with_items:
    - /tmp/foo/
Thanks
I have found the solution: delegate_to: localhost
- name: Create temp folder
  file:
    path: "{{ item }}"
    state: directory
    mode: 0755
  delegate_to: localhost
  with_items:
    - /tmp/foo/
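Another common pattern, not from this thread, is to put control-node work in its own play that targets localhost; a minimal sketch:
- hosts: localhost
  connection: local
  tasks:
    - name: Create temp folder on the Ansible control node
      file:
        path: /tmp/foo/
        state: directory
        mode: 0755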

How can I run an Ansible playbook without hosts specified?

I'm writing a playbook that spins up X number of EC2 AWS instances and then installs some software on them (apt packages and pip modules). When I run my playbook, it executes the shell commands on my local system, because Ansible won't run unless I specify a host, and I specified localhost.
In the playbook, I've tried specifying "hosts: all" at the top-level, but this just makes the playbook run for a second without doing anything.
playbook.yml
- name: Spin up spot instances
  hosts: localhost
  connection: local
  vars_files: ansible-vars.yml
  tasks:
    - name: create {{ spot_count }} spot instances with spot_price of ${{ spot_price }}
      local_action:
        module: ec2
        region: us-east-2
        spot_price: '{{ spot_price }}'
        spot_wait_timeout: 180
        keypair: My-keypair
        instance_type: t3a.nano
        image: ami-0f65671a86f061fcd
        group: Allow from Home
        instance_initiated_shutdown_behavior: terminate
        wait: yes
        count: '{{ spot_count }}'
      register: ec2
    - name: Wait for the instances to boot by checking the ssh port
      wait_for: host={{ item.public_ip }} port=22 delay=15 timeout=300 state=started
      with_items: "{{ ec2.instances }}"
    - name: test whoami
      args:
        executable: /bin/bash
      shell: whoami
      with_items: "{{ ec2.instances }}"
    - name: Update apt
      args:
        executable: /bin/bash
      shell: apt update
      become: yes
      with_items: "{{ ec2.instances }}"
    - name: Install Python and Pip
      args:
        executable: /bin/bash
      shell: apt install python3 python3-pip -y
      become: yes
      with_items: "{{ ec2.instances }}"
    - name: Install Python modules
      args:
        executable: /bin/bash
      shell: pip3 install bs4 requests
      with_items: "{{ ec2.instances }}"
ansible-vars.yml
ansible_ssh_private_key_file: ~/.ssh/my-keypair.pem
spot_count: 1
spot_price: '0.002'
remote_user: ubuntu
The EC2 instances get created just fine and the "wait for SSH" task works, but the shell tasks get run on my local system instead of the remote hosts.
How can I tell Ansible to connect to the EC2 instances without using a hosts file since we're creating them on the fly?
Can you try this and see if it works?
- name: test whoami
  args:
    executable: /bin/bash
  shell: whoami
  delegate_to: "{{ item.public_ip }}"
  with_items: "{{ ec2.instances }}"
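As a further option (not from the original answer), the add_host module can put the freshly created instances into an in-memory inventory group, and a second play can then target that group over SSH. A minimal sketch, assuming the ubuntu user and the key file from ansible-vars.yml:
    # in the first play's tasks, after the wait_for task:
    - name: Add the new instances to an in-memory inventory group
      add_host:
        name: "{{ item.public_ip }}"
        groups: spot_instances
        ansible_user: ubuntu
        ansible_ssh_private_key_file: "{{ ansible_ssh_private_key_file }}"
      with_items: "{{ ec2.instances }}"

- name: Configure the spot instances
  hosts: spot_instances
  become: yes
  tasks:
    - name: Update apt and install Python
      shell: apt update && apt install -y python3 python3-pip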

Ansible complains about "The MySQL-python module is required"

I have Ansible 2.6.1 installed on my local machine (WSL; Ubuntu):
ansible 2.6.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/user/.ansible/plugins/modules', u'/usr/share/ansile/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
My target machine is running Ubuntu 16.04-LTS.
I'm using this task to install python3-mysqldb:
- name: "Debian | Install Mysql Client package"
apt:
name: "{{ item }}"
state: present
with_items:
- mysql-client
- python3-dev
- libmysqlclient-dev
- python3-mysqldb
when:
- zabbix_server_database == 'mysql'
tags:
- zabbix-server
- init
- database
It fails in this task:
- name: "MySQL | Create database and import file >= 3.0"
mysql_db:
name: "{{ zabbix_server_dbname }}"
encoding: "{{ zabbix_server_dbencoding }}"
collation: "{{ zabbix_server_dbcollation }}"
state: import
target: "{{ ls_output_create.stdout }}"
when:
- zabbix_version is version_compare('3.0', '>=')
- zabbix_database_sqlload
- not done_file.stat.exists
delegate_to: "{{ delegated_dbhost }}"
tags:
- zabbix-server
- database
Here's the failure message:
fatal: [target_host -> target_host-db]: FAILED! => {"changed": false, "msg": "The MySQL-python module is required."}
I can confirm python3-mysqldb was indeed installed:
/.
/usr
/usr/share
/usr/share/doc
/usr/share/doc/python3-mysqldb
/usr/share/doc/python3-mysqldb/changelog.Debian.gz
/usr/share/doc/python3-mysqldb/copyright
/usr/lib
/usr/lib/python3
/usr/lib/python3/dist-packages
/usr/lib/python3/dist-packages/mysqlclient-1.3.7.egg-info
/usr/lib/python3/dist-packages/mysqlclient-1.3.7.egg-info/top_level.txt
/usr/lib/python3/dist-packages/mysqlclient-1.3.7.egg-info/PKG-INFO
/usr/lib/python3/dist-packages/mysqlclient-1.3.7.egg-info/dependency_links.txt
/usr/lib/python3/dist-packages/_mysql_exceptions.py
/usr/lib/python3/dist-packages/_mysql.cpython-35m-x86_64-linux-gnu.so
/usr/lib/python3/dist-packages/MySQLdb
/usr/lib/python3/dist-packages/MySQLdb/connections.py
/usr/lib/python3/dist-packages/MySQLdb/release.py
/usr/lib/python3/dist-packages/MySQLdb/cursors.py
/usr/lib/python3/dist-packages/MySQLdb/constants
/usr/lib/python3/dist-packages/MySQLdb/constants/ER.py
/usr/lib/python3/dist-packages/MySQLdb/constants/CLIENT.py
/usr/lib/python3/dist-packages/MySQLdb/constants/REFRESH.py
/usr/lib/python3/dist-packages/MySQLdb/constants/FIELD_TYPE.py
/usr/lib/python3/dist-packages/MySQLdb/constants/FLAG.py
/usr/lib/python3/dist-packages/MySQLdb/constants/__init__.py
/usr/lib/python3/dist-packages/MySQLdb/constants/CR.py
/usr/lib/python3/dist-packages/MySQLdb/converters.py
/usr/lib/python3/dist-packages/MySQLdb/compat.py
/usr/lib/python3/dist-packages/MySQLdb/__init__.py
/usr/lib/python3/dist-packages/MySQLdb/times.py
I also tried installing the Python package MySQL-python using pip, but I got the same error message.
I'm stumped. I don't know what to do anymore.
EDIT: I also tried installing Python 2.7.x on the target machine and made sure that /usr/bin/python is symlinked to Python 2.7.x, but I'm still getting the same error. I'm using DJ Wasabi's zabbix-server role.
I think you are mixing things up with your delegation. I would simplify things.
Option one: run everything locally. Assumes your DB server is reachable through the network:
- hosts: localhost
  connection: local
  tasks:
    - name: "Debian | Install Mysql Client package"
      apt:
        name: "{{ item }}"
        state: present
      with_items:
        - mysql-client
        - python3-dev
        - libmysqlclient-dev
        - python3-mysqldb
      when:
        - zabbix_server_database == 'mysql'
      tags:
        - zabbix-server
        - init
        - database
    - name: "MySQL | Create database and import file >= 3.0"
      mysql_db:
        name: "{{ zabbix_server_dbname }}"
        encoding: "{{ zabbix_server_dbencoding }}"
        collation: "{{ zabbix_server_dbcollation }}"
        state: import
        target: "{{ ls_output_create.stdout }}"
      when:
        - zabbix_version is version_compare('3.0', '>=')
        - zabbix_database_sqlload
        - not done_file.stat.exists
      tags:
        - zabbix-server
        - database
Option two: run the SQL commands from the DB server (then you don't need mysql-python on your local machine, but you need python and mysql-python on the remote server hosting MySQL):
- hosts: dbserver
  tasks:
    - name: "Debian | Install Mysql Client package"
      apt:
        name: "{{ item }}"
        state: present
      with_items:
        - mysql-client
        - python3-dev
        - libmysqlclient-dev
        - python3-mysqldb
      when:
        - zabbix_server_database == 'mysql'
      tags:
        - zabbix-server
        - init
        - database
    - name: "MySQL | Create database and import file >= 3.0"
      mysql_db:
        name: "{{ zabbix_server_dbname }}"
        encoding: "{{ zabbix_server_dbencoding }}"
        collation: "{{ zabbix_server_dbcollation }}"
        state: import
        target: "{{ ls_output_create.stdout }}"
      when:
        - zabbix_version is version_compare('3.0', '>=')
        - zabbix_database_sqlload
        - not done_file.stat.exists
      tags:
        - zabbix-server
        - database
Add ansible_python_interpreter to your inventory as follows:
[test-server]
server1 ansible_ssh_host=x.x.x.x ansible_ssh_user=test ansible_python_interpreter=/usr/bin/python3
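The same override can also be set at the play level instead of per host; a minimal sketch (assuming python3 lives at /usr/bin/python3 on the target):
- hosts: test-server
  vars:
    # force modules to run under the target's Python 3, where python3-mysqldb is installed
    ansible_python_interpreter: /usr/bin/python3
  tasks:
    - name: Show which interpreter Ansible will use
      debug:
        var: ansible_python_interpreter
This matters here because the MySQLdb bindings in the question were installed for Python 3, while the target's default /usr/bin/python was Python 2.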
