How to send logs from rsyslog to kafka? - bash

I wrote an Ansible playbook that disables SELinux and ufw on the host and runs a bash script. The bash script takes an integer input and prints that many lines, each containing a timestamp and 32 random characters. Now I want this output to go to a UDP port and have rsyslog send the logs to Kafka. I don't know how to do this last part. I would be very grateful if you could help me.
Thanks
Ansible playbook:
---
- name: My playbook
  hosts: all
  become: yes
  tasks:
    - name: Disabling SELinux state
      selinux:
        state: disabled
    - name: Stop and disable firewalld.
      service:
        name: ufw
        state: stopped
        enabled: False
    - name: Copy unit file
      ansible.builtin.copy:
        src: /home/AnsiblePlaybooks/files/mytest.service
        dest: /etc/systemd/system/mytest.service
        owner: root
        group: root
        mode: '0644'
    - name: Copy app file
      template:
        src: /home/ansible-test/AnsiblePlaybooks/app.sh
        dest: /home/app.sh
        owner: root
        group: root
        mode: '0644'
    - name: reload daemons
      ansible.builtin.systemd:
        daemon_reload: yes
    - name: Enable mytest.service
      ansible.builtin.systemd:
        name: mytest.service
        state: started
        enabled: yes
bash script:
#!/usr/bin/env bash
# no spaces around = in a bash assignment
num={{ input_var }}
for i in $(seq 1 "$num") ; do
  echo =============================
  echo "$i: $(date +%Y-%m-%d-%H:%M:%S) $(openssl rand -hex 16)"
  sleep 0.5
done

You can set up Kafka Connect (or write your own producer) to send the data to Kafka; you would then use Ansible to set up the host and deploy the configuration files for the connector.
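Another route, since you already plan to send the script's output to a UDP port, is to have rsyslog itself produce to Kafka through its omkafka output module. Below is only a minimal sketch of such an rsyslog config, assuming rsyslog has the imudp and omkafka modules available; the broker address localhost:9092 and the topic name mytest-logs are placeholders you would replace:

# /etc/rsyslog.d/50-kafka.conf
module(load="imudp")      # accept syslog messages over UDP
module(load="omkafka")    # produce messages to Kafka

# listen on the UDP port the script writes to
input(type="imudp" port="514")

# forward every received message to a Kafka topic
action(type="omkafka"
       broker=["localhost:9092"]
       topic="mytest-logs")

On the host you could then point the script at that port with something like ./app.sh | logger --udp --server 127.0.0.1 --port 514 (or let the systemd unit do it) and restart rsyslog so the new config is loaded.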

Related

unable to run playbooks on rancherOS

I'm trying to run playbooks on a RancherOS server.
- hosts: rancher
  tasks:
    - name: Insert/Update eth0 configuration stanza in /etc/network/interfaces
      blockinfile:
        path: ros.yaml
        block: |
          #cloud-config
          rancher:
            console: ubuntu
          runcmd:
            - [ apt-get, install, -y, curl ]
            - [ apt-get, install, -y, zip ]
    - name: merge
      become: yes
      command: ros config merge -i ros.yaml
    - name: Reboot immediately if there was a change.
      shell: "sleep 5 && reboot"
      register: reboot
    - debug:
        msg=reboot.stdout
    - name: Wait for the reboot to complete if there was a change.
      become: yes
      wait_for_connection:
        connect_timeout: 20
        sleep: 5
        delay: 5
        timeout: 300
When I run it, this playbook executes successfully, but the server is not coming back up.

ansible error 'first argument must be string or compiled pattern'

I have this code in my playbook:
- hosts: standby
  remote_user: root
  tasks:
    - name: replace hostname in config
      replace:
        path: /opt/agentd.conf
        regexp: #\s+Hostname\=
        replace: Hostname={{hname}}
        backup: yes
    - name: add database array in files
      lineinfile:
        path: /opt/zabbix_agent/share/scripts/{{ item }}
        line: 'DBNAME_ARRAY=( {{dbname}} )'
        insertafter: DB2PATH=/home/db2inst1/sqllib/bin/db2
        backup: yes
      with_items:
        - Connections
        - HadrAndLog
        - Memory
        - Regular
    - name: restart service
      shell: /etc/init.d/agent restart
      register: command_output
      become: yes
      become_user: root
      tags: restart
    - debug: msg="{{command_output.stdout_lines}}"
      tags: set_config_st
It should replace # Hostname= in the config file with Hostname=<given hostname> and add an array to 4 scripts; the array holds the name of the given database. Then it restarts the agent to apply the changes.
When I run this command:
ansible-playbook -i /Ansible/inventory/hostfile /Ansible/provision/nconf.yml --tags set_config_st --extra-vars "hname=fazi dbname=fazidb"
I get this error:
first argument must be string or compiled pattern
I searched a bit but couldn't find the reason. What should I do?
The problem is in this line:
regexp: #\s+Hostname\=
You have to quote the regex: YAML comments start with #, so everything after the # is ignored by Ansible and the replace module receives no pattern at all, which is why that error message occurs.
So the correct line should be:
regexp: '#\s+Hostname\='
or, since backslashes must be escaped inside a double-quoted YAML string:
regexp: "#\\s+Hostname\\="
I think the problem is with indentation. Please try as below.
- hosts: standby
  remote_user: root
  tasks:
    - name: replace hostname in config
      replace:
        path: /opt/agentd.conf
        regexp: #\s+Hostname\=
        replace: Hostname={{hname}}
        backup: yes
    - name: add database array in files
      lineinfile:
        path: /opt/zabbix_agent/share/scripts/{{ item }}
        line: 'DBNAME_ARRAY=( {{dbname}} )'
        insertafter: DB2PATH=/home/db2inst1/sqllib/bin/db2
        backup: yes
      with_items:
        - Connections
        - HadrAndLog
        - Memory
        - Regular
    - name: restart service
      shell: /etc/init.d/agent restart
      register: command_output
      become: yes
      become_user: root
      tags: restart
    - debug: msg="{{command_output.stdout_lines}}"
      tags: set_config_st

When conditional from ssh command line in Ansible Role

I am new to Ansible so any help would be appreciated.
I need to check whether my remote CentOS servers have a writable /boot before I try to push VMware tools to them. The install will fail if it is read-only. How do I add another when condition for this raw Linux command? I know I have to use register or standard out, but I cannot find examples to guide me.
The raw Linux command would be:
mount | grep boot
And I need to catch rw; the target must not be ro like in this example:
/dev/sda1 on /boot type ext4 (ro,relatime,data=ordered)
I tried adding a task under the block like in the Ansible documentation.
- name: Catch Targets with read only boot
  tasks:
    - command: mount | grep boot
      register: boot_mode
    - shell: echo "motd contains the word hi"
      when: boot_mode.stdout.find('ro') != -1
---
- name: Wrapper for conditional tasks
  block:
    - name: Copy Files from Mirror to Remote Guest
      get_url:
        url: "{{ item }}"
        dest: /tmp
        owner: root
        group: root
      with_items:
        - http://mirror.compuscan.co.za/repo/vmwaretools65u2/CentOS7/VMwareTools-10.3.5-10430147.tar.gz
    - name: UnTAR the installer
      unarchive:
        src: /tmp/VMwareTools-10.3.5-10430147.tar.gz
        dest: /tmp
        remote_src: yes
    - name: Run the PL install
      become: yes
      command: /tmp/vmware-tools-distrib/vmware-install.pl -d
    - name: Perform Clean Up
      file:
        state: absent
        path: "{{ item }}"
      with_items:
        - /tmp/vmware-tools-distrib/
        - /tmp/VMwareTools-10.3.5-10430147.tar.gz
    - name: Report on success or failure
      service:
        name: vmware-tools
        state: started
        enabled: yes
  when: ansible_distribution == 'CentOS' and ansible_distribution_major_version == '7'
  ignore_errors: yes
I want the role/playbook to skip targets whose /boot is mounted read-only.
Put a stat task in front of the block:
- stat:
    path: /boot
  register: boot_mode
Then add the condition to execute the block only if /boot is writeable:
when:
  - boot_mode.stat.writeable
  - ansible_distribution == 'CentOS'
  - ansible_distribution_major_version == '7'
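Pieced together, a minimal sketch of how the stat check and the conditional block might fit inside the existing play (the install step is shortened and the task names are illustrative; only the /boot path and the facts come from the answer above):

- hosts: all
  become: yes
  tasks:
    - name: Check whether /boot is writeable
      stat:
        path: /boot
      register: boot_mode
    - name: Install VMware tools only where /boot is writeable
      block:
        - name: Run the PL install
          command: /tmp/vmware-tools-distrib/vmware-install.pl -d
      when:
        - boot_mode.stat.writeable
        - ansible_distribution == 'CentOS'
        - ansible_distribution_major_version == '7'

Targets whose /boot is mounted read-only then simply skip the block instead of failing the play.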

Ansible task for checking that a host is really offline after shutdown

I am using the following Ansible playbook to shut down a list of remote Ubuntu hosts all at once:
- hosts: my_hosts
  become: yes
  remote_user: my_user
  tasks:
    - name: Confirm shutdown
      pause:
        prompt: >-
          Do you really want to shutdown machine(s) "{{play_hosts}}"? Press
          Enter to continue or Ctrl+C, then A, then Enter to abort ...
    - name: Cancel existing shutdown calls
      command: /sbin/shutdown -c
      ignore_errors: yes
    - name: Shutdown machine
      command: /sbin/shutdown -h now
Two questions on this:
Is there any module available which can handle the shutdown in a more elegant way than having to run two custom commands?
Is there any way to check that the machines are really down? Or is it an anti-pattern to check this from the same playbook?
I tried something with the net_ping module but I am not sure if this is its real purpose:
- name: Check that machine is down
  become: no
  net_ping:
    dest: "{{ ansible_host }}"
    count: 5
    state: absent
This, however, fails with
FAILED! => {"changed": false, "msg": "invalid connection specified, expected connection=local, got ssh"}
In more restricted environments where ping messages are blocked, you can watch the ssh port until it goes down. In my case I have set the timeout to 60 seconds.
- name: Save target host IP
  set_fact:
    target_host: "{{ ansible_host }}"
- name: wait for ssh to stop
  wait_for: "port=22 host={{ target_host }} delay=10 state=stopped timeout=60"
  delegate_to: 127.0.0.1
There is no shutdown module. You can use a single fire-and-forget call:
- name: Shutdown server
  become: yes
  shell: sleep 2 && /sbin/shutdown -c && /sbin/shutdown -h now
  async: 1
  poll: 0
As for net_ping, it is meant for network appliances such as switches and routers. If you rely on ICMP messages to test the shutdown process, you can use something like this:
- name: Store actual host to be used with local_action
  set_fact:
    original_host: "{{ ansible_host }}"
- name: Wait for ping loss
  local_action: shell ping -q -c 1 -W 1 {{ original_host }}
  register: res
  retries: 5
  until: ('100.0% packet loss' in res.stdout)
  failed_when: ('100.0% packet loss' not in res.stdout)
  changed_when: no
This will wait for 100% packet loss or fail after 5 retries.
Here you want to use local_action, because otherwise the commands would be executed on the remote host (which is supposed to be down).
And you want the trick of storing ansible_host in a temporary fact, because ansible_host is replaced with 127.0.0.1 when the task is delegated to the local host.

how to use synchronize module to copy file to many servers

I tried to find an example where I can pull a file from serverA to a group of servers.
mygroup consists of 10 servers, and I need to copy that file over to those 10 servers.
Here is what I have, but it is not working exactly. I can do a one-to-one copy without problems if I leave out the handlers part.
- hosts: serverA
  tasks:
    - name: Transfer file from serverA to mygroup
      synchronize:
        src: /tmp/file.txt
        dest: /tmp/file.txt
        mode: pull
  handlers:
    - name: to many servers
      delegate_to: $item
      with_items: ${groups.mygroup}
You should carefully read the documentation about how Ansible works (what a host pattern is, what a strategy is, what a handler is, and so on).
Here's the answer to your question:
---
# Push mode (connect to xenial1 and rsync-push to other hosts)
- hosts: xenial-group:!xenial1
  gather_facts: no
  tasks:
    - synchronize:
        src: /tmp/hello.txt
        dest: /tmp/hello.txt
      delegate_to: xenial1

# Pull mode (connect to other hosts and rsync-pull from xenial1)
- hosts: xenial1
  gather_facts: no
  tasks:
    - synchronize:
        src: /tmp/hello.txt
        dest: /tmp/hello.txt
        mode: pull
      delegate_to: "{{ item }}"
      with_inventory_hostnames: xenial-group:!xenial1
Inventory:
[xenial-group]
xenial1 ansible_ssh_host=192.168.168.186 ansible_user=ubuntu ansible_ssh_extra_args='-o ForwardAgent=yes'
xenial2 ansible_ssh_host=192.168.168.187 ansible_user=ubuntu ansible_ssh_extra_args='-o ForwardAgent=yes'
xenial3 ansible_ssh_host=192.168.168.188 ansible_user=ubuntu ansible_ssh_extra_args='-o ForwardAgent=yes'
Keep in mind that synchronize is a wrapper for rsync, so for this setup to work there must be ssh connectivity between the target hosts themselves (usually you only have an ssh connection from the control host to the target hosts). I use agent forwarding for this.
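For the agent forwarding part, a minimal sketch of what that can look like on the control host (the key path and playbook name are placeholders); the inventory above already passes -o ForwardAgent=yes to ssh, so the agent loaded here is what the delegated rsync will use:

eval "$(ssh-agent -s)"            # start an ssh agent for this shell
ssh-add ~/.ssh/id_rsa             # load the key the xenial hosts trust
ansible-playbook -i inventory sync.yml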
