How to start a service on a target server using an Ansible playbook - ansible

I'm trying to run the following Ansible playbook to start the "nexus" service on a remote server at the path "nexux/bin", but it fails:
- hosts: nexus
  become: yes
  become_user: nexus
  become_method: sudo
  tasks:
    - name: changing dir and starting nexus service
      shell:
        chdir: nexux/bin
        executable: ./nexus start
Can someone troubleshoot this and deduce the root cause?

As the Ansible output told you, with that syntax you did not provide a command. The executable: option is meant to be the shell executable (e.g. /bin/bash), not the "run this thing" argument. It is very clear in the examples section of the fine manual:
- shell: cd /opt/nexus/bin && ./nexus start
If you want to use the chdir: option, you must put it under args:, a sibling YAML key to shell:, like so:
- shell: echo hello world
  args:
    chdir: /opt/nexus/bin
    # I'm omitting the "executable:" key here, because you for sure
    # do not want to do that, but if you did, then fine, put it here
Having said all of that, as the docs also indicate, what you really want is command:, because you are not making use of any special shell characters (redirects, pipes, && chains, etc.), so:
- command: ./nexus start
  args:
    chdir: /opt/nexus/bin
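Putting it together, a minimal sketch of the corrected playbook might look like this (assuming the real path is /opt/nexus/bin rather than the "nexux" typo from the question):

- hosts: nexus
  become: yes
  become_user: nexus
  become_method: sudo
  tasks:
    # command runs ./nexus start directly, without spawning a shell
    - name: Start the nexus service
      command: ./nexus start
      args:
        chdir: /opt/nexus/bin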

Try the shell module; I also recommend running with nohup and sending the output to a file:
- shell: |
    cd /opt/nexus/bin
    nohup ./nexus start > /tmp/nexus.log 2>&1 &
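If the backgrounded process still dies when the task's shell exits, Ansible's async/poll settings are another way to kick off a long-running command without waiting for it; a sketch, where the 300-second timeout is an arbitrary assumption:

- name: Start nexus and do not wait for it
  shell: ./nexus start
  args:
    chdir: /opt/nexus/bin
  async: 300   # allow the job to run for up to 300 seconds
  poll: 0      # fire and forget, do not poll for completion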

Related

Running a terminal command via an Ansible playbook

I'm having what appears to be a common issue running shell/terminal commands via an Ansible playbook.
If I go onto one of my remote machines and type the command in a fresh terminal window, it works; however, attempting to do the same via a playbook runs into directory issues.
This is essentially the command (some of it changed a little for privacy); it's an authenticator...
authenticator authenticate user userkeytab
If I try to just run it as shell, I get an error that the authenticator command can't be found by /bin/sh, so I attempted to use chdir to run the command from the default directory (/Users/username).
Here is, roughly, the playbook with one of my failed attempts... I just don't know what chdir I should be using...
- hosts: all
  tasks:
    - name: Reauthenticate login
      shell: authenticator authenticate user userkeytab
      args:
        chdir: ~/
I've also tried /usr/local/bin... any thoughts?
Can you try the 'command' module? Example below:
- name: Change the working directory to somedir/ and run the command as db_owner if /path/to/database does not exist.
  command: /usr/bin/make_database.sh db_user db_name
  become: yes
  become_user: db_owner
  args:
    chdir: somedir/
    creates: /path/to/database
Resource:
https://docs.ansible.com/ansible/latest/modules/command_module.html
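Since the original error was that /bin/sh cannot find the authenticator command, the issue is more likely PATH than the working directory. A hedged sketch of two alternatives, where the /usr/local/bin location is only an assumption based on the question:

# Call the binary by an absolute path (assumed location)
- name: Reauthenticate login
  command: /usr/local/bin/authenticator authenticate user userkeytab

# Or keep the bare name but extend PATH for this task only
- name: Reauthenticate login (PATH variant)
  shell: authenticator authenticate user userkeytab
  environment:
    PATH: "/usr/local/bin:{{ ansible_env.PATH }}"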

What's the difference between Ansible 'raw', 'shell' and 'command'?

What is the difference between raw, shell and command in an Ansible playbook? And when should each be used?
command: executes a remote command on the target host without passing it through a shell, so shell features such as pipes, redirects and environment variable expansion are not available.
It can be used to launch scripts (.sh) or to execute simple commands. For example:
- name: Cat a file
  command: cat somefile.txt

- name: Execute a script
  command: somescript.sh param1 param2
shell: executes a remote command on the target host, opening a new shell (/bin/sh).
It can be used when you want to execute more complex commands, for example commands concatenated with pipes:
- name: Look for something in a file
  shell: cat somefile.txt | grep something
raw: executes low-level commands where the Python interpreter is missing on the target host; a common use case is installing Python itself. This module should not be used in other cases, where command and shell are the better choice.
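To make the command/shell difference above concrete, here is a small sketch reusing the same pipeline: with command the pipe is not interpreted, while shell hands the whole string to /bin/sh:

# Behaves unexpectedly: "|", "grep" and "something" become literal arguments to cat
- name: Pipe with command
  command: cat somefile.txt | grep something

# Works: the whole line is interpreted by /bin/sh
- name: Pipe with shell
  shell: cat somefile.txt | grep something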
Since I was stumbling over the same question, I wanted to share my findings here too.
The command and shell modules, as well as gather_facts (i.e. setup.py), depend on a properly installed Python interpreter on the remote node(s). If that requirement isn't fulfilled, one may see errors because it isn't possible to execute
python <ansiblePython.py>
In a Debian 10 (Buster) minimal installation, for example, python3 was installed but the symlink to python was missing.
To initialize the system correctly before applying all other roles, I used an approach with the raw module:
ansible/initSrv/main.yml
- hosts: "{{ target_hosts }}"
gather_facts: no # is necessary because setup.py depends on Python too
pre_tasks:
- name: "Make sure remote system is initialized correctly"
raw: 'ln -s /usr/bin/python3 /usr/bin/python'
register: set_symlink
failed_when: set_symlink.rc != 0 and set_symlink.rc != 1
which is doing something like
/bin/sh -c 'ln -s /usr/bin/python3 /usr/bin/python'
on the remote system.
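An alternative that avoids creating the symlink is to point Ansible at python3 directly via the inventory; a minimal sketch, assuming an INI-style inventory group named target_hosts:

[target_hosts:vars]
ansible_python_interpreter=/usr/bin/python3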
Further Documentation
raw module – Executes a low-down and dirty command
A common case is installing python on a system without python installed by default.
... but not only restricted to that
Playbook Keyword - pre_tasks
A list of tasks to execute before roles.
Set the order of task execution in Ansible

File created by ansible script module not present after running against remote node

I'm setting up some servers in AWS and want to use Ansible to run some shell scripts on remote nodes. I wrote a playbook as follows:
- hosts: remote-nodes
  tasks:
    - name: Execute script
      script: /home/ubuntu/FastBFT_ethereum/experiment/a.sh
a.sh, which runs on the remote nodes, is as follows:
#!/usr/bin/env bash
echo "test">> test.txt
python writejson.py
But when I check test.txt, I find it wasn't created on the remote nodes. Help me please.
Assuming that you want test.txt to be created in the experiment directory, this should be changed to something like:
- hosts: remote-nodes
  tasks:
    - name: Execute script
      script: /home/ubuntu/FastBFT_ethereum/experiment/a.sh
      args:
        chdir: /home/ubuntu/FastBFT_ethereum/experiment
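To confirm the result on the remote node without logging in by hand, a small follow-up check could look like this (a sketch; the path reuses the directory from the question):

- name: Check whether test.txt was created
  stat:
    path: /home/ubuntu/FastBFT_ethereum/experiment/test.txt
  register: test_file

- debug:
    msg: "test.txt exists: {{ test_file.stat.exists }}"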

Using ssh-keyscan in shell module does not produce any output in Ansible

I'm trying to follow this solution, using the shell module and ssh-keyscan to add the key of a newly created EC2 instance to my known_hosts file.
After trying to do this multiple ways as listed in that question, I eventually ran just the ssh-keyscan command using the shell module, without the append. I am getting no output from this task:
- name: accept new ssh fingerprints
  shell: ssh-keyscan -H {{ item.public_ip }}
  args:
    executable: /bin/bash
  with_items: "{{ ec2.instances }}"
  register: keyscan

- debug: var=keyscan
Debug here shows nothing in stdout and stdout_lines and nothing in stderr and stderr_lines.
Note: I tried running this with bash as the executable (as shown) after reading that the shell module defaults to /bin/sh, which is the dash shell on my Linux Mint VirtualBox, but it's the same regardless.
I have tested the shell command with the following task and I see the proper output in stdout and stdout_lines:
- name: test the shell
  shell: echo hello
  args:
    executable: /bin/bash
  register: hello

- debug: var=hello
What is going on here? Running ssh-keyscan in a terminal (not through Ansible) works as expected.
EDIT: Looking at the raw_params output from debug shows ssh-keyscan -H x.x.x.x and copying and pasting this into the terminal works as expected.
The answer is that it doesn't work the first time, presumably because the new instance isn't ready to answer yet. While researching another method I stumbled across the retries keyword in Ansible, which allows retrying a task until a condition is met. I tried this, and on attempt number 2 of the retry loop it works.
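A sketch of what that retry loop might look like on the original task, assuming the key becomes available within a few attempts (the retry count and delay are arbitrary):

- name: accept new ssh fingerprints
  shell: ssh-keyscan -H {{ item.public_ip }}
  args:
    executable: /bin/bash
  register: keyscan
  retries: 5                      # try up to 5 times
  delay: 10                       # wait 10 seconds between attempts
  until: keyscan.stdout != ""     # stop once ssh-keyscan returns output
  with_items: "{{ ec2.instances }}"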

Stuck debugging an Ansible task whose remote command freezes

I'm setting up an Ansible role to install Ahsay Offsite Backup Server.
After downloading and extracting the compressed file containing the software, I need to run the install script. I've determined that the failure occurs at an early step in the script, where it checks that the current user has appropriate permissions.
When I run the playbook, the final task never finishes.
The role
- name: Check if OBS install files have already been downloaded
  stat:
    path: /tmp/obs/version.txt
  register: stat_result

- name: Ensures /tmp/obs exists
  file: path=/tmp/obs state=directory

- name: Download and extract OBS install files
  unarchive:
    src: https://ahsay-dn.ahsay.com/v6/obsr/62900/obsr-nix.tar.gz
    dest: /tmp/obs
    remote_src: true
    validate_certs: no
  when: stat_result.stat.exists == false

- name: Install OBS
  command: bash -lc "/tmp/obs/bin/install.sh > /tmp/install_output.log"
The playbook configuration is for all tasks to become sudo.
If I run the command in a shell on the remote host, it executes successfully.
I've hit similar issues before where commands fail because (in the case of rvm) they require bash_profile to be loaded to pull in a bunch of environment variables first. The fix for that was, as I've done above, to wrap the command in bash -lc "...", but that hasn't helped this time.
I'd love any suggestions of how I could continue troubleshooting this one.
You are checking for file presence before ensuring the folder exists.
Some applications require a TTY, and when they don't have one they sit waiting on an interactive question.
To really debug while the command is "stuck", connect to the offending machine and try to analyze what the script is doing: look in its /proc/${PID} folder (if you're on Linux), maybe attach to it via strace -p ${PID}, and maybe dup its stderr to see whether it prints something that makes sense to you.
Also, you don't really have to use command; you can use the shell module and specify its args to make sure the command runs from a specific folder, like so:
- name: Install OBS
  shell: |
    ./bin/install.sh \
      1> /tmp/install.output.log \
      2> /tmp/install.error.log
  args:
    executable: /bin/bash
    chdir: /tmp/obs
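If the install script really is blocked on an interactive question, one hedged workaround is to close stdin so the prompt fails immediately instead of hanging; whether install.sh tolerates that is an assumption:

- name: Install OBS (non-interactive attempt)
  shell: |
    # < /dev/null closes stdin so any prompt errors out instead of waiting
    ./bin/install.sh < /dev/null \
      1> /tmp/install.output.log \
      2> /tmp/install.error.log
  args:
    executable: /bin/bash
    chdir: /tmp/obs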
