Execute fabric commands from the Karaf terminal using Ansible

In Ansible, how can we execute fabric commands in the Karaf terminal?
Normally Ansible executes shell commands, but there are some commands I need to run inside the Karaf terminal. Is there any way to do this?
More generally, how can an Ansible playbook open a terminal other than the default shell?

Karaf exposes sshd server, you can use it to call commands from Ansible.
inventory:
test ansible_host=192.168.0.15
test-karaf ansible_host=127.0.0.1 ansible_port=8101 ansible_user=karafuser ansible_password=karafpassword ansible_ssh_common_args="-o ProxyCommand='ssh 192.168.0.15 -W %h:%p'"
playbook:
- hosts: test
  gather_facts: no
  tasks:
    - shell: ps aux | grep [b]in/karaf
    - raw: system:version
      delegate_to: test-karaf
This will grep for the karaf process on the test host and execute the system:version command inside the Karaf shell on that host.
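Assuming the inventory and playbook above are saved as inventory.ini and karaf.yml (both file names are only placeholders), this can be run with:
ansible-playbook -i inventory.ini karaf.yml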

Related

What's the difference between Ansible 'raw', 'shell' and 'command'?

What is the difference between raw, shell and command in an Ansible playbook? And when to use which?
command: executes a remote command on the target host, without processing it through a shell.
It can be used to launch scripts (.sh) or to execute simple commands. For example:
- name: Cat a file
  command: cat somefile.txt

- name: Execute a script
  command: somescript.sh param1 param2
shell: executes a remote command on the target host through a shell (/bin/sh by default).
It can be used when you need more complex commands, for example commands chained with pipes:
- name: Look for something in a file
  shell: cat somefile.txt | grep something
raw: executes a low-level command directly over SSH, so it works even when the Python interpreter is missing on the target host; a common use case is installing Python itself. It should not be used in other cases, where command and shell are the suggested choice.
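As a minimal sketch of such a bootstrap task (the package manager and package name depend on the target system and are assumptions here):
- name: Install Python so that the other modules can run
  raw: apt-get update && apt-get install -y python3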
Since I was stumbling over the same question, I wanted to share my findings here too.
The command and shell modules, as well as gather_facts (i.e. setup.py), depend on a properly installed Python interpreter on the remote node(s). If that requirement isn't fulfilled, one may see errors where it isn't possible to execute
python <ansiblePython.py>
In a minimal Debian 10 (Buster) installation, for example, python3 was installed but the symlink to python was missing.
To initialize the system correctly before applying all other roles, I used an approach with the raw module:
ansible/initSrv/main.yml
- hosts: "{{ target_hosts }}"
gather_facts: no # is necessary because setup.py depends on Python too
pre_tasks:
- name: "Make sure remote system is initialized correctly"
raw: 'ln -s /usr/bin/python3 /usr/bin/python'
register: set_symlink
failed_when: set_symlink.rc != 0 and set_symlink.rc != 1
which is doing something like
/bin/sh -c 'ln -s /usr/bin/python3 /usr/bin/python'
on the remote system.
Further Documentation
raw module – Executes a low-down and dirty command
A common case is installing python on a system without python installed by default.
... but not only restricted to that
Playbook Keyword - pre_tasks
A list of tasks to execute before roles.
Set the order of task execution in Ansible

How to install a script interactively via an Ansible playbook

I can run the script with command-line arguments on the Linux server and it works fine.
For example: ./install.sh -n -I <IP address of the server>
The above command installs the script on the server.
When I try to do this via an Ansible (version 2.5) playbook using the shell module, it gives me an argument error.
- name: Running the script
  shell: yes | ./fullinstall
The expect module has been tried.
--my-arg1=IP address
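For reference, the expect module mentioned above drives interactive prompts; a minimal sketch (it requires pexpect on the remote host, and the prompt text and IP here are assumptions):
- name: Run the installer and answer its prompt
  expect:
    command: ./install.sh -n -I 192.168.0.10
    responses:
      'Do you want to continue': 'yes'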
- shell: "./install.sh -n -I"
args:
chdir: somedir/
creates: somelog.txt
You can look here for examples.
You can also place the install.sh file on the server as a template. Then you can set the variables as desired in Jinja2.
- name: Template install.sh
  template:
    src: /install.sh.j2
    dest: /tmp/install.sh
    mode: '0755'  # make the templated script executable
- shell: "cd /tmp/ ; ./install.sh"
Your install.sh.j2 contains:
IP address: {{ my_ip }}
And set the variable on the command line with:
ansible-playbook -e my_ip="192.168.0.1"
Use the command module:
- name: run script
  command: /path/to/install.sh -n -I {{ ip_address }}
Run the playbook with:
ansible-playbook -e ip_address="192.168.3.9" play.yml
If you want to enter the IP address interactively, use vars_prompt.
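A minimal sketch of the vars_prompt approach (the hosts group and prompt text are assumptions; the script path matches the task above):
- hosts: all
  vars_prompt:
    - name: ip_address
      prompt: "IP address of the server"
      private: no
  tasks:
    - name: run script
      command: /path/to/install.sh -n -I {{ ip_address }}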

Shell command works manually, not using Ansible

I'm playing with Ansible (still learning), but I've encountered a problem I can't find a solution to.
I'm trying to install and launch Tomcat on a remote server using Ansible.
The installation is working, but the last step which is the activation of the Tomcat server is failing.
If I manually launch the startup.sh script (as su -), using the following command: bash /opt/tomcat/startup.sh, I can see the Tomcat homepage.
Using the Ansible playbook I wrote, even though Ansible doesn't show any errors, I can't see the Tomcat homepage.
Here is the task I'm running:
- name: Launch Tomcat
  command: bash /opt/tomcat/startup.sh
  become: true
I tried to add become_user: root and become_method: sudo with no success.
I think it may be related to how become: true is handled by ansible but I'm not sure.
Have you also tried using the shell module instead of the command module?
With the command module, the command is executed without being processed through a shell. As a consequence, some variables like $HOME are not available, and stream operations like <, >, | and & will not work.
The shell module runs a command through a shell, /bin/sh by default. This can be changed with the executable option. Piping and redirection are therefore available.
(Source: https://blog.confirm.ch/ansible-modules-shell-vs-command/)
There might be a problem with the environment. "sudo su" is different from "su -" where
-, -l, --login Provide an environment similar to what the user would expect had the user logged in directly.
Try shell (because it allows pipes, redirection, logical operations, ...) without become: true
shell: su - && bash /opt/tomcat/startup.sh
Make sure remote_user is the same user for whom the su - command works fine.
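Another option (a sketch that is not part of the answers above; it assumes root's login environment is what startup.sh needs) is to combine become with a login shell so the environment resembles an interactive su - session:
- name: Launch Tomcat with a login-style environment
  shell: bash -l -c 'bash /opt/tomcat/startup.sh'
  become: true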
I had the same problem while working with startup.sh in an Ansible script. I found that the Tomcat server process started but then shut down again immediately.
The solution is to start the Tomcat server with nohup through Ansible.
Here is a sample script.
cat start.yml
---
- name: Playbook to start the Tomcat server
  #hosts: localhost
  hosts: webserver
  tasks:
    - name: Start the server tomcat from UI
      shell:
        nohup /home/tomcat/bin/catalina.sh start >> /home/tomcat/somelog

Execute a bash script with arguments in Ansible

I am new to Ansible. I have a bash script which takes three arguments. I have to run this bash script on the remote server from Ansible.
Basically, I want to pass the hostname, duration and comment fields as arguments when executing the Ansible command. I don't want to edit the file, as I am doing this from a Slack channel.
- hosts: nagiosserver
  tasks:
    - name: Executing a script
      command: sh /home/aravind/downtime.sh {hostname} {duration} {comments}
If you're executing ansible via ansible-playbook myplay.yml, you can pass additional variables via -e varname=varvalue. A lazy fix would be to run with
ansible-playbook myplay.yml -e my_hostname=foo -e my_duration=bar -e my_comments=foobar
But you should consider that the hostname is already defined in your inventory or gathered facts.
So you could update your playbook to use these additional variables using
- hosts: nagiosserver
  tasks:
    - name: Executing a script
      command: "sh /home/aravind/downtime.sh {{ my_hostname }} {{ my_duration }} {{ my_comments }}"

How to process output from Ansible using local script

I want to run some command that returns some data and process that data with a script located on the Ansible server. How can I do that?
For example:
I want to run
ansible all -a "cat /etc/redhat-release"
Then I want to call a script called version_parser.py (located on the local Ansible server, not the host where Ansible is executing the command) with the name of the server as a parameter and pipe the output of the remote call into it as input.
So that in reality I get something similar like
ssh server1 "cat /etc/redhat-release" | version_parser.py server1
ssh server2 "cat /etc/redhat-release" | version_parser.py server2
...
What is the easiest approach to do something like this?
You could run the remote command and store the result in a variable. Next you can run a local_action and execute your local script with the stored variable:
---
- name: Run remote command
  command: "bash -c 'ls -l /etc/init.d/a* | grep -c app'"
  register: store

- name: Run result against local script
  local_action: "shell echo '{{ store.stdout }}' | /path/to/local/parser.py {{ inventory_hostname }}"
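For what it's worth, local_action is shorthand for delegating the task to the control machine; the same step could also be written with delegate_to (same hypothetical parser path as above):
- name: Run result against local script
  shell: "echo '{{ store.stdout }}' | /path/to/local/parser.py {{ inventory_hostname }}"
  delegate_to: localhost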
