Unable to run playbooks on RancherOS - Ansible

I'm trying to run playbooks on a RancherOS server.
- hosts: rancher
  tasks:
    - name: Insert/Update RancherOS console configuration in ros.yaml
      blockinfile:
        path: ros.yaml
        block: |
          #cloud-config
          rancher:
            console: ubuntu
          runcmd:
            - [ apt-get, install, -y, curl ]
            - [ apt-get, install, -y, zip ]

    - name: Merge the configuration
      become: yes
      command: ros config merge -i ros.yaml

    - name: Reboot immediately if there was a change.
      shell: "sleep 5 && reboot"
      register: reboot

    - debug:
        var: reboot.stdout

    - name: Wait for the reboot to complete if there was a change.
      become: yes
      wait_for_connection:
        connect_timeout: 20
        sleep: 5
        delay: 5
        timeout: 300
The playbook runs to completion successfully, but the server never comes back up after the reboot.
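One likely culprit, offered as a guess rather than a confirmed diagnosis: reboot tears down the SSH connection while the shell task is still running. Firing the reboot asynchronously (the same async: 1 / poll: 0 trick used in the shutdown answer further down this page) lets the task return before the connection drops; a minimal sketch:

# async: 1 with poll: 0 detaches the task, so Ansible does not sit on
# an SSH session that the reboot is about to kill.
- name: Reboot immediately if there was a change (fire-and-forget)
  become: yes
  shell: "sleep 5 && reboot"
  async: 1
  poll: 0

- name: Wait for the reboot to complete
  wait_for_connection:
    connect_timeout: 20
    sleep: 5
    delay: 5
    timeout: 300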

Related

How to send logs from rsyslog to Kafka?

I wrote an Ansible playbook that disables SELinux and ufw on the host and runs a bash script. The script takes an integer input and prints that many lines, each containing a timestamp and 32 random hex characters. Now I want this output to go to a UDP port, with rsyslog sending the logs on to Kafka. I don't know how to do that last task, and would be very grateful for any help.
Thanks
Ansible playbook:
---
- name: My playbook
  hosts: all
  become: yes
  tasks:
    - name: Disable SELinux
      selinux:
        state: disabled

    - name: Stop and disable ufw
      service:
        name: ufw
        state: stopped
        enabled: False

    - name: Copy unit file
      ansible.builtin.copy:
        src: /home/AnsiblePlaybooks/files/mytest.service
        dest: /etc/systemd/system/mytest.service
        owner: root
        group: root
        mode: '0644'

    - name: Copy app file
      template:
        src: /home/ansible-test/AnsiblePlaybooks/app.sh
        dest: /home/app.sh
        owner: root
        group: root
        mode: '0644'

    - name: Reload systemd daemons
      ansible.builtin.systemd:
        daemon_reload: yes

    - name: Enable and start mytest.service
      ansible.builtin.systemd:
        name: mytest.service
        state: started
        enabled: yes
bash script:
#!/usr/bin/env bash
# No spaces around '=' - "num = ..." would be parsed as a command, not an assignment.
num={{ input_var }}
for i in $(seq 1 "$num"); do
    echo =============================
    echo "$i: $(date +%Y-%m-%d-%H:%M:%S) $(openssl rand -hex 16)"
    sleep 0.5
done
You can set up Kafka Connect (or write your own producer) to send the data to Kafka; you would use Ansible to set up the host and deploy the configuration files for the connector.
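If you do want rsyslog itself to do the forwarding, as the question describes, here is a minimal Ansible sketch. It assumes a Debian/Ubuntu host where rsyslog's omkafka output module is available as the rsyslog-kafka package; the broker address, UDP port, and topic name (mytest-logs) are placeholders, not values from the question:

- name: Forward UDP syslog input to Kafka via rsyslog (sketch)
  hosts: all
  become: yes
  tasks:
    - name: Install the rsyslog Kafka output module
      apt:
        name: rsyslog-kafka
        state: present

    - name: Configure rsyslog to listen on UDP 514 and ship to Kafka
      copy:
        dest: /etc/rsyslog.d/60-kafka.conf
        content: |
          module(load="imudp")
          input(type="imudp" port="514")
          module(load="omkafka")
          action(type="omkafka"
                 broker=["localhost:9092"]
                 topic="mytest-logs")
      notify: restart rsyslog

  handlers:
    - name: restart rsyslog
      service:
        name: rsyslog
        state: restarted

The script's output can then be pointed at that UDP listener by piping it through logger, for example: sh /home/app.sh | logger -d -n 127.0.0.1 -P 514 (util-linux logger; -d selects UDP).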

Ansible playbook for installing cyberpanel stops execution

Greetings for the day,
I was trying to install CyberPanel using Ansible by writing a playbook.
The playbook was this:
---
- name: Installing CyberPanel
  hosts: ansible_client
  user: ubuntu
  become: yes
  become_user: root
  become_method: sudo
  tasks:
    - name: Installing screen
      apt:
        name: screen
        state: present

    - name: Download the script
      get_url:
        url: https://cyberpanel.net/install.sh
        dest: /root/installer.sh

    - name: Execute the script
      become: yes
      become_method: su
      become_user: root
      become_exe: sudo su -
      expect:
        command: screen -S cyberpanel-installation sh installer.sh
        echo: yes
        responses:
          (.*) Please enter the number(>*): "1"
          'Full installation \[Y/n\]:': "Y"
          (.*) Remote MySQL(.*): "N"
          (.*)Enter specific version such as:(.*): ""
          (.*)Choose(.*)password(.*): "r"
          'Please select \[Y/n\]:': "Y"
          (.*)Please type Yes or no(.*): "no"
          'Would you like to restart your server now? \[y/N\]:': "y"
      async: 1800
      poll: 5
      register: output

    - name: Checking the status
      async_status:
        jid: "{{ output.ansible_job_id }}"
      register: job_result
      until: job_result.finished
      retries: 150
      delay: 60

    - name: Debugging
      debug:
        var: output
The playbook doesn't have any errors or conflicts.
The playbook works, and CyberPanel does install within 20-30 minutes: because the playbook uses screen, the screen session stays detached on the destination server, and after attaching to it (once the playbook has stopped executing) we can see the installation still in progress; it completes successfully within 20-30 minutes.
The issue is that the playbook stops execution after about 1 minute with a return code (rc) of 0.
This is the output of the playbook.
As you can see, I am using the async method, with both poll=0 and poll>0, for long-running execution of the script. It is not working; the playbook still times out.
I also increased the SSH timeout to check whether an SSH timeout was taking place, and there is none.
I also tried using the timeout attribute instead of the async method, and that didn't work either.
Any help is much appreciated.
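For what it's worth, the usual fire-and-forget pattern pairs poll: 0 on the long-running task with a separate async_status polling loop; with poll: 5 the task itself still blocks on the SSH session. A minimal sketch under that assumption (the expect/screen wrapper is omitted for brevity; /root/installer.sh is the path used in the playbook above):

- name: Execute the installer in the background
  command: sh /root/installer.sh
  async: 1800        # allow the job up to 30 minutes
  poll: 0            # detach immediately instead of holding the SSH session
  register: output

- name: Wait for the installation to finish
  async_status:
    jid: "{{ output.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 150
  delay: 60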

What is the difference between `sudo` and `become` for privilege escalation? [duplicate]

I have an Ansible play file that has to perform two tasks: first, get the disk usage on the local machine; second, get the disk usage of a remote machine and install apache2 on that remote machine.
When I try to run the file I get the error "ERROR! 'sudo' is not a valid attribute for a Play".
When I remove the sudo and apt sections from the yml file, it runs fine.
I am using ansible 2.9.4. Below are the two playbook files.
File running without any error:
---
-
  connection: local
  hosts: localhost
  name: play1
  tasks:
    -
      command: "df -h"
      name: "Find the disk space available"
    -
      command: "ls -lrt"
      name: "List all the files"
    -
      name: "List All the Files"
      register: output
      shell: "ls -lrt"
    -
      debug: var=output.stdout_lines
-
  hosts: RemoteMachine1
  name: play2
  tasks:
    - name: "Find the disk space"
      command: "df -h"
      register: result
    - debug: var=result.stdout_lines
File running with error:
---
-
  connection: local
  hosts: localhost
  name: play1
  tasks:
    -
      command: "df -h"
      name: "Find the disk space available"
    -
      command: "ls -lrt"
      name: "List all the files"
    -
      name: "List All the Files"
      register: output
      shell: "ls -lrt"
    -
      debug: var=output.stdout_lines
-
  hosts: RemoteMachine1
  name: play2
  sudo: yes
  tasks:
    - name: "Find the disk space"
      command: "df -h"
      register: result
    - name: "Install Apache in the remote machine"
      apt: name=apache2 state=latest
    - debug: var=result.stdout_lines
Complete error message:
ERROR! 'sudo' is not a valid attribute for a Play
The error appears to be in '/home/Documents/ansible/play.yml': line 20, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:

-
  hosts: RemoteMachine1
  ^ here
The Ansible play keyword sudo was deprecated (with warnings) long ago, in version 2.0, and removed in version 2.2.
See the actual supported play keywords. Use become: true instead:
- hosts: RemoteMachine1
  name: play2
  become: yes
  tasks:
    - name: "Find the disk space"
      command: "df -h"
      register: result
    - name: "Install Apache in the remote machine"
      apt: name=apache2 state=latest
    - debug: var=result.stdout_lines
Use become: yes and it will run your tasks as the root user; it can also be set per task, as sketched below.
Become directives
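For completeness, a minimal sketch of the same play with become moved to the one task that needs root, instead of applying it play-wide:

- hosts: RemoteMachine1
  name: play2
  tasks:
    - name: "Find the disk space"    # runs as the connecting user
      command: "df -h"
      register: result
    - name: "Install Apache in the remote machine"
      apt: name=apache2 state=latest
      become: yes                    # escalate to root for this task only
    - debug: var=result.stdout_lines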

Ansible task for checking that a host is really offline after shutdown

I am using the following Ansible playbook to shut down a list of remote Ubuntu hosts all at once:
- hosts: my_hosts
  become: yes
  remote_user: my_user
  tasks:
    - name: Confirm shutdown
      pause:
        prompt: >-
          Do you really want to shutdown machine(s) "{{play_hosts}}"? Press
          Enter to continue or Ctrl+C, then A, then Enter to abort ...

    - name: Cancel existing shutdown calls
      command: /sbin/shutdown -c
      ignore_errors: yes

    - name: Shutdown machine
      command: /sbin/shutdown -h now
Two questions on this:
Is there any module available which can handle the shutdown in a more elegant way than having to run two custom commands?
Is there any way to check that the machines are really down? Or is it an anti-pattern to check this from the same playbook?
I tried something with the net_ping module but I am not sure if this is its real purpose:
- name: Check that machine is down
  become: no
  net_ping:
    dest: "{{ ansible_host }}"
    count: 5
    state: absent
This, however, fails with
FAILED! => {"changed": false, "msg": "invalid connection specified, expected connection=local, got ssh"}
In more restricted environments, where ping messages are blocked, you can listen on the SSH port until it goes down. In my case I have set the timeout to 60 seconds.
- name: Save target host IP
  set_fact:
    target_host: "{{ ansible_host }}"

- name: Wait for SSH to stop
  wait_for: "port=22 host={{ target_host }} delay=10 state=stopped timeout=60"
  delegate_to: 127.0.0.1
There is no shutdown module. You can use a single fire-and-forget call:
- name: Shutdown server
  become: yes
  shell: sleep 2 && /sbin/shutdown -c && /sbin/shutdown -h now
  async: 1
  poll: 0
As for net_ping, it is for network appliances such as switches and routers. If you rely on ICMP messages to test the shutdown process, you can use something like this:
- name: Store actual host to be used with local_action
  set_fact:
    original_host: "{{ ansible_host }}"

- name: Wait for ping loss
  local_action: shell ping -q -c 1 -W 1 {{ original_host }}
  register: res
  retries: 5
  until: ('100.0% packet loss' in res.stdout)
  failed_when: ('100.0% packet loss' not in res.stdout)
  changed_when: no
This will wait for 100% packet loss, or fail after 5 retries.
You want local_action here because otherwise the commands would be executed on the remote host (which is supposed to be down).
And you need the trick of storing ansible_host in a temporary fact, because ansible_host is replaced with 127.0.0.1 when the task is delegated to the local host.

ansible rolling restart playbook

Folks,
I'd like to have a service be restarted individually on each host, and wait for user input before continuing onto the next host in the inventory.
Currently, if you have the following:
- name: Restart something
  command: service foo restart
  tags:
    - foo

- name: wait
  pause: prompt="Make sure org.foo.FooOverload exception is not present"
  tags:
    - foo
It will only prompt once, and does not really have the desired effect.
What is the proper Ansible syntax to wait for user input before running the restart task on each host?
Use a combination of the serial attribute and the step option of a playbook.
playbook.yml
- name: Do it
  hosts: myhosts
  serial: 1
  tasks:
    - shell: hostname
Call the playbook with the --step option:
ansible-playbook playbook.yml --step
You will be prompted for every host.
Perform task: shell hostname (y/n/c): y
Perform task: shell hostname (y/n/c): ****************************************
changed: [y.y.y.y]
Perform task: shell hostname (y/n/c): y
Perform task: shell hostname (y/n/c): ****************************************
changed: [z.z.z.z]
For more information: Start and Step
I went ahead with this:
- name: Restart Datastax Agent
  tags:
    - agent
  hosts: cassandra
  sudo: yes
  serial: 1
  gather_facts: yes
  tasks:
    - name: Pause
      pause: prompt="Hit RETURN to restart datastax agent on {{ inventory_hostname }}"

    - name: Restarting Datastax Agent on {{ inventory_hostname }}
      service: name=datastax-agent state=restarted