Auditbeat not picking up authentication events in CentOS 7 - elasticsearch

I am trying to ship the authentication-related events of my CentOS 7 machine to Elasticsearch. Strangely, I am not getting any authentication events.
When I ran the debug command auditbeat -c auditbeat.conf -e -d "*", I found something like this:
{
  "@timestamp": "2019-01-15T11:54:37.246Z",
  "@metadata": {
    "beat": "auditbeat",
    "type": "doc",
    "version": "6.4.0"
  },
  "error": {
    "message": "failed to set audit PID. An audit process is already running (PID 68504)"
  },
  "beat": {
    "name": "env-cs-westus-devtest-66-csos-logs-es-master-0",
    "hostname": "env-cs-westus-devtest-66-csos-logs-es-master-0",
    "version": "6.4.0"
  },
  "host": {
    "name": "env-cs-westus-devtest-66-csos-logs-es-master-0"
  },
  "event": {
    "module": "auditd"
  }
}
There was also an error line like this:
Failure receiving audit events {"error": "failed to set audit PID. An audit process is already running (PID 68504)"}
Machine details
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
Auditbeat Configuration File
#================================ General ======================================
fields_under_root: False
queue:
  mem:
    events: 4096
    flush:
      min_events: 2048
      timeout: 1s
max_procs: 1
max_start_delay: 10s
#================================= Paths ======================================
path:
  home: "/usr/share/auditbeat"
  config: "/etc/auditbeat"
  data: "/var/lib/auditbeat"
  logs: "/var/log/auditbeat/auditbeat.log"
#============================ Config Reloading ================================
config:
  modules:
    path: ${path.config}/conf.d/*.yml
    reload:
      period: 10s
      enabled: False
#========================== Modules configuration =============================
auditbeat.modules:
#----------------------------- Auditd module -----------------------------------
- module: auditd
  resolve_ids: True
  failure_mode: silent
  backlog_limit: 8196
  rate_limit: 0
  include_raw_message: True
  include_warnings: True
  audit_rules: |
    -w /etc/group -p wa -k identity
    -w /etc/passwd -p wa -k identity
    -w /etc/gshadow -p wa -k identity
    -w /etc/shadow -p wa -k identity
    -w /etc/security/opasswd -p wa -k identity
    -a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -k access
    -a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access
    -a always,exit -F dir=/home -F uid=0 -F auid>=1000 -F auid!=4294967295 -C auid!=obj_uid -F key=power-abuse
    -a always,exit -F arch=b64 -S setuid -F a0=0 -F exe=/usr/bin/su -F key=elevated-privs
    -a always,exit -F arch=b32 -S setuid -F a0=0 -F exe=/usr/bin/su -F key=elevated-privs
    -a always,exit -F arch=b64 -S setresuid -F a0=0 -F exe=/usr/bin/sudo -F key=elevated-privs
    -a always,exit -F arch=b32 -S setresuid -F a0=0 -F exe=/usr/bin/sudo -F key=elevated-privs
    -a always,exit -F arch=b64 -S execve -C uid!=euid -F euid=0 -F key=elevated-privs
    -a always,exit -F arch=b32 -S execve -C uid!=euid -F euid=0 -F key=elevated-privs
#----------------------------- File Integrity module -----------------------------------
- module: file_integrity
  paths:
    - /bin
    - /usr/bin
    - /sbin
    - /usr/sbin
    - /etc
    - /home/jenkins
  exclude_files:
    - (?i)\.sw[nop]$
    - ~$
    - /\.git($|/)
  scan_at_start: True
  scan_rate_per_sec: 50 MiB
  max_file_size: 100 MiB
  hash_types: [sha1]
  recursive: False
#================================ Outputs ======================================
#-------------------------- Elasticsearch output -------------------------------
output.elasticsearch:
  enabled: True
  hosts:
    - x.x.x:9200
  compression_level: 0
  protocol: "http"
  worker: 1
  bulk_max_size: 50
  timeout: 90
#================================ Logging ======================================
logging:
  level: "info"
  selectors: ["*"]
  to_syslog: False
  to_eventlog: False
  metrics:
    enabled: True
    period: 30s
  to_files: True
  files:
    path: /var/log/auditbeat
    name: "auditbeat"
    rotateeverybytes: 10485760
    keepfiles: 7
    permissions: 0600
  json: False
Version of Auditbeat
auditbeat version 6.4.0 (amd64), libbeat 6.4.0
Has anyone faced a similar issue and found a resolution?
Note: this auditbeat configuration successfully captures authentication events on Ubuntu.

So I posted the same question in the Elastic Beats forum and got a solution; you can find it here.
As per their suggestion, turning off the auditd service allows audit events to be captured by Auditbeat. I tried it and it worked for me. But I am not sure of the implications of turning auditd off, so I might switch to a Filebeat-based solution.
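For reference, a minimal sketch of the steps that worked, assuming the stock CentOS 7 service names (note that on RHEL/CentOS 7 auditd refuses to stop via systemctl, so the legacy service command is used):
# See which process currently holds the audit netlink socket (the PID from the error)
sudo auditctl -s
# Stop and disable auditd so Auditbeat can register itself as the audit PID
sudo service auditd stop
sudo systemctl disable auditd
# Restart Auditbeat so it can take over as the audit event consumer
sudo systemctl restart auditbeat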

Related

unable to execute a bash script in k8s cronjob pod's container

Team,
My requirement is to mount a bash script from a ConfigMap into a directory inside the container and run it to clone a repo, but I am getting the message below:
/bin/bash: line 5: ./repo/clone.sh: No such file or directory
I cannot run the file, yet I can cat it just fine. I have tried my best and am still looking, but no luck so far.
cron job
spec:
  concurrencyPolicy: Allow
  jobTemplate:
    metadata:
    spec:
      template:
        metadata:
        spec:
          containers:
          - args:
            - -c
            - |
              set -x
              pwd && ls
              ls -ltr /
              cat /repo/clone.sh
              ./repo/clone.sh
              pwd
            command:
            - /bin/bash
            envFrom:
            - configMapRef:
                name: sonarscanner-configmap
            image: artifactory.build.team.com/product-containers/user/sonarqube-scanner:4.7.0.2747
            imagePullPolicy: IfNotPresent
            name: sonarqube-sonarscanner
            securityContext:
              runAsUser: 0
            volumeMounts:
            - mountPath: /repo
              name: repo-checkout
          dnsPolicy: ClusterFirst
          initContainers:
          - args:
            - -c
            - cd /
            command:
            - /bin/sh
            image: busybox
            imagePullPolicy: IfNotPresent
            name: clone-repo
            securityContext:
              privileged: true
            volumeMounts:
            - mountPath: /repo
              name: repo-checkout
              readOnly: true
          restartPolicy: OnFailure
          securityContext:
            fsGroup: 0
          volumes:
          - configMap:
              defaultMode: 420
              name: product-configmap
            name: repo-checkout
  schedule: '*/1 * * * *'
ConfigMap
kind: ConfigMap
metadata:
apiVersion: v1
data:
  clone.sh: |-
    #!bin/bash
    set -xe
    apk add git curl
    #Containers that fail to resolve repo url can use below step.
    repo_url=$(nslookup ${CODE_REPO_URL} | grep Non -A 2 | grep Name | cut -d: -f2)
    repo_ip=$(nslookup ${CODE_REPO_URL} | grep Non -A 2 | grep Address | cut -d: -f2)
    if grep ${repo_url} /etc/hosts; then
      echo "git dns entry exists locally"
    else
      echo "Adding dns entry for git inside container"
      echo ${repo_ip} ${repo_url} >> /etc/hosts
    fi
    cd / && cat /etc/hosts && pwd
    git clone "https://$RU:$RT@${CODE_REPO_URL}/r/a/${CODE_REPO_NAME}" && \
    (cd "${CODE_REPO_NAME}" && mkdir -p .git/hooks && \
    curl -Lo `git rev-parse --git-dir`/hooks/commit-msg \
    https://$RU:$RT@${CODE_REPO_URL}/r/tools/hooks/commit-msg; \
    chmod +x `git rev-parse --git-dir`/hooks/commit-msg)
    cd ${CODE_REPO_NAME}
    pwd
Output of pod describe:
Warning FailedCreatePodSandBox 1s kubelet, node1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "sonarqube-cronjob-1670256720-fwv27": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:303: getting the final child's pid from pipe caused \"EOF\"": unknown
pod logs
+ pwd
+ ls
/usr/src
+ ls -ltr /repo/clone.sh
lrwxrwxrwx 1 root root 15 Dec 5 16:26 /repo/clone.sh -> ..data/clone.sh
+ ls -ltr
total 60
.
drwxr-xr-x 2 root root 4096 Aug 9 08:58 sbin
drwx------ 2 root root 4096 Aug 9 08:58 root
drwxr-xr-x 2 root root 4096 Aug 9 08:58 mnt
drwxr-xr-x 5 root root 4096 Aug 9 08:58 media
drwxrwsrwx 3 root root 4096 Dec 5 16:12 repo <<<<< MY MOUNTED DIR
.
+ cat /repo/clone.sh
#!bin/bash
set -xe
apk add git curl
#Containers that fail to resolve repo url can use below step.
repo_url=$(nslookup ${CODE_REPO_URL} | grep Non -A 2 | grep Name | cut -d: -f2)
repo_ip=$(nslookup ${CODE_REPO_URL} | grep Non -A 2 | grep Address | cut -d: -f2)
if grep ${repo_url} /etc/hosts; then
echo "git dns entry exists locally"
else
echo "Adding dns entry for git inside container"
echo ${repo_ip} ${repo_url} >> /etc/hosts
fi
cd / && cat /etc/hosts && pwd
git clone "https://$RU:$RT#${CODE_REPO_URL}/r/a/${CODE_REPO_NAME}" && \
(cd "${CODE_REPO_NAME}" && mkdir -p .git/hooks && \
curl -Lo `git rev-parse --git-dir`/hooks/commit-msg \
https://$RU:$RT#${CODE_REPO_URL}/r/tools/hooks/commit-msg; \
chmod +x `git rev-parse --git-dir`/hooks/commit-msg)
cd code_dir
+ ./repo/clone.sh
/bin/bash: line 5: ./repo/clone.sh: No such file or directory
+ pwd
/usr/src
Assuming the working directory is different than /:
If you want to source your script in the current bash process (shorthand: .), you have to add a space between the dot and the path:
. /repo/clone.sh
If you want to execute it in a child process, remove the dot:
/repo/clone.sh
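A compact sketch of the difference, using the paths from the pod logs above (the working directory /usr/src is taken from the + pwd output):
cd /usr/src               # the container's working directory, per the logs
./repo/clone.sh           # relative: resolves to /usr/src/repo/clone.sh -> No such file or directory
. /repo/clone.sh          # sources the mounted script into the current shell
/bin/bash /repo/clone.sh  # or run it in a child process; this also works when the
                          # ConfigMap mount (defaultMode 420) leaves the file non-executable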

ansible synchronize module fails to get directory from remote to local - failed: No such file or directory

I wish to copy /web/playbooks/automation/misc/filecopyprod from mysourceuser@mysourcehost to mydestuser@mydesthost, under the location /web/playbooks/automation/misc/filecopy/tmpfiles/500/.
It is evident that both the source and the destination are present and have good permissions.
[mydestuser@mydesthost ~]$ ssh mysourceuser@mysourcehost ls -ld '/web/playbooks/automation/misc/filecopyprod'
##################################################################
# *** This Server is using Centrify *** #
# *** Remember to use your Active Directory account *** #
# *** password when logging in *** #
##################################################################
drwxrwxr-x 3 mysourceuser mysourceuser 209 Sep 26 14:58 /web/playbooks/automation/misc/filecopyprod
[mydestuser@mydesthost ~]$ ls -ld /web/playbooks/automation/misc/filecopy/tmpfiles/500/
drwxr-xr-x 2 mydestuser aces 6 Sep 26 14:13 /web/playbooks/automation/misc/filecopy/tmpfiles/500/
Here is my playbook that runs on mydesthost and gets files & folders from the remote server mysourceuser@mysourcehost to the local server mydestuser@mydesthost:
- name: Copying from "{{ inventory_hostname }}" to this ansible server.
  tags: validate
  synchronize:
    src: "'{{ item }}'"
    dest: "{{ playbook_dir }}/tmpfiles/{{ Latest_Build_Number }}/"
    mode: pull
    copy_links: yes
  with_items:
    - "{{ source_file_new.splitlines() }}"
To run the above playbook:
ansible-playbook /web/playbooks/automation/misc/filecopy/copyfiles.yml -e "source_file_new='$source_file_new'" -e "Latest_Build_Number='500'"
Output of my run:
TASK [Copying from "mysourcehost" to this ansible server.] **********************
task path: /web/playbooks/automation/misc/filecopy/copyfiles.yml:218
Monday 26 September 2022 14:13:02 -0500 (0:00:00.047) 0:00:03.084 ******
redirecting (type: action) ansible.builtin.synchronize to ansible.posix.synchronize
redirecting (type: action) ansible.builtin.synchronize to ansible.posix.synchronize
<mysourcehost> ESTABLISH LOCAL CONNECTION FOR USER: mydestuser
<mysourcehost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/mydestuser/.ansible/tmp/ansible-local-20463qmilic81 `"&& mkdir "` echo /home/mydestuser/.ansible/tmp/ansible-local-20463qmilic81/ansible-tmp-1664219583.0005133-20679-105296975361597 `" && echo ansible-tmp-1664219583.0005133-20679-105296975361597="` echo /home/mydestuser/.ansible/tmp/ansible-local-20463qmilic81/ansible-tmp-1664219583.0005133-20679-105296975361597 `" ) && sleep 0'
Using module file /home/mydestuser/.ansible/collections/ansible_collections/ansible/posix/plugins/modules/synchronize.py
<mysourcehost> PUT /home/mydestuser/.ansible/tmp/ansible-local-20463qmilic81/tmpxhpyaf0m TO /home/mydestuser/.ansible/tmp/ansible-local-20463qmilic81/ansible-tmp-1664219583.0005133-20679-105296975361597/AnsiballZ_synchronize.py
<mysourcehost> EXEC /bin/sh -c 'chmod u+x /home/mydestuser/.ansible/tmp/ansible-local-20463qmilic81/ansible-tmp-1664219583.0005133-20679-105296975361597/ /home/mydestuser/.ansible/tmp/ansible-local-20463qmilic81/ansible-tmp-1664219583.0005133-20679-105296975361597/AnsiballZ_synchronize.py && sleep 0'
<mysourcehost> EXEC /bin/sh -c '/usr/local/bin/python3.8 /home/mydestuser/.ansible/tmp/ansible-local-20463qmilic81/ansible-tmp-1664219583.0005133-20679-105296975361597/AnsiballZ_synchronize.py && sleep 0'
<mysourcehost> EXEC /bin/sh -c 'rm -f -r /home/mydestuser/.ansible/tmp/ansible-local-20463qmilic81/ansible-tmp-1664219583.0005133-20679-105296975361597/ > /dev/null 2>&1 && sleep 0'
failed: [mysourcehost] (item=/web/playbooks/automation/misc/filecopyprod) => {
    "ansible_loop_var": "item",
    "changed": false,
    "cmd": "/bin/rsync --delay-updates -F --compress --copy-links --archive --rsh=/usr/share/centrifydc/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null --out-format=<<CHANGED>>%i %n%L mysourceuser@mysourcehost:'/web/playbooks/automation/misc/filecopyprod' /web/playbooks/automation/misc/filecopy/tmpfiles/500/",
    "invocation": {
        "module_args": {
            "_local_rsync_password": null,
            "_local_rsync_path": "rsync",
            "_substitute_controller": false,
            "archive": true,
            "checksum": false,
            "compress": true,
            "copy_links": true,
            "delete": false,
            "dest": "/web/playbooks/automation/misc/filecopy/tmpfiles/500/",
            "dest_port": null,
            "dirs": false,
            "existing_only": false,
            "group": null,
            "link_dest": null,
            "links": null,
            "mode": "pull",
            "owner": null,
            "partial": false,
            "perms": null,
            "private_key": null,
            "recursive": null,
            "rsync_opts": [],
            "rsync_path": null,
            "rsync_timeout": 0,
            "set_remote_user": true,
            "src": "mysourceuser@mysourcehost:'/web/playbooks/automation/misc/filecopyprod'",
            "ssh_args": null,
            "ssh_connection_multiplexing": false,
            "times": null,
            "verify_host": false
        }
    },
    "item": "/web/playbooks/automation/misc/filecopyprod",
    "msg": "Warning: Permanently added 'mysourcehost' (ED25519) to the list of known hosts.\r\n\nThis system is for the use by authorized users only. All data contained\non all systems is owned by the company and may be monitored, intercepted,\nrecorded, read, copied, or captured in any manner and disclosed in any\nmanner, by authorized company personnel. Users (authorized or unauthorized)\nhave no explicit or implicit expectation of privacy. Unauthorized or improper\nuse of this system may result in administrative, disciplinary action, civil\nand criminal penalties. Use of this system by any user, authorized or\nunauthorized, constitutes express consent to this monitoring, interception,\nrecording, reading, copying, or capturing and disclosure.\n\nIF YOU DO NOT CONSENT, LOG OFF NOW.\n\n##################################################################\n# *** This Server is using Centrify *** #\n# *** Remember to use your Active Directory account *** #\n# *** password when logging in *** #\n##################################################################\n\nrsync: change_dir \"/home/mysourceuser//'/web/playbooks/automation/misc\" failed: No such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1658) [Receiver=3.1.2]\nrsync: [Receiver] write error: Broken pipe (32)\n",
    "rc": 23
}
From the output I took the rsync command in question and tried to run it manually on my playbook host mydestuser@mydesthost, and I get a similar error:
"/bin/rsync --delay-updates -F --compress --copy-links --archive --rsh=/usr/share/centrifydc/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null --out-format=<<CHANGED>>%i %n%L mysourceuser@mysourcehost:'/web/playbooks/automation/misc/filecopyprod' /web/playbooks/automation/misc/filecopy/tmpfiles/500/"
Output:
bash: /bin/rsync --delay-updates -F --compress --copy-links --archive --rsh=/usr/share/centrifydc/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null --out-format=<<CHANGED>>%i %n%L mysourceuser@mysourcehost:'/web/playbooks/automation/misc/filecopyprod' /web/playbooks/automation/misc/filecopy/tmpfiles/500/: No such file or directory
Upon a suggestion from Stack Overflow I quoted --out-format, but I continue to get the same error. See a snapshot of the attempt below:
"/bin/rsync --delay-updates -F --compress --copy-links --archive --rsh=/usr/share/centrifydc/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null --out-format='<<CHANGED>>%i %n%L' mysourceuser@mysourcehost:'/tmp/myfolder' /tmp/myfolder1"
Can you please suggest?
Your full command was this
/bin/rsync --delay-updates -F --compress --copy-links --archive --rsh=/usr/share/centrifydc/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null --out-format=<<CHANGED>>%i %n%L mysourceuser@mysourcehost:'/web/playbooks/automation/misc/filecopyprod' /web/playbooks/automation/misc/filecopy/tmpfiles/500/
You've omitted to quote arguments containing spaces, so when the shell parses the line it splits at those spaces, leading to syntax errors when rsync tries to understand the line.
Fix the --rsh parameter, which contains spaces, by changing it to this:
--rsh='/usr/share/centrifydc/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
Fix the --out-format parameter, which contains whitespace and shell special characters by changing it to this:
--out-format='<<CHANGED>>%i %n%L'
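Putting both fixes together, the manually run command would look like this (a sketch using the same paths as the question):
/bin/rsync --delay-updates -F --compress --copy-links --archive \
    --rsh='/usr/share/centrifydc/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' \
    --out-format='<<CHANGED>>%i %n%L' \
    mysourceuser@mysourcehost:/web/playbooks/automation/misc/filecopyprod \
    /web/playbooks/automation/misc/filecopy/tmpfiles/500/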
In a later example, you've put double quotes around the entire command, so the shell tries to execute the whole command as a single entity. Compare: in the first line below, the shell splits the line at spaces and executes the command echo with the parameter hello. In the second line, the shell sees the quoted string and treats it as a single entity; it then tries to execute a command called echo hello — not the command echo with the parameter hello, but a command with a literal space character in its name:
echo hello # → hello
"echo hello" # → -bash: echo hello: command not found
Rule: if a command or parameter contains a space or other special shell character and it's to be considered as a single item, it must be quoted.
The playbook works for older versions of rsync. With the latest version it started to fail, as reported here.
Changed
synchronize:
  src: "'{{ item }}'"
to
synchronize:
  src: "{{ item }}"
and the error was gone; the issue is resolved with the latest rsync.
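For reference, a sketch of the corrected task in full, exactly as in the question apart from the src line:
- name: Copying from "{{ inventory_hostname }}" to this ansible server.
  tags: validate
  synchronize:
    src: "{{ item }}"    # no literal inner quotes; newer rsync no longer tolerates them
    dest: "{{ playbook_dir }}/tmpfiles/{{ Latest_Build_Number }}/"
    mode: pull
    copy_links: yes
  with_items:
    - "{{ source_file_new.splitlines() }}"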

I'm trying to unseal Vault using Ansible, but I'm getting a connection refused error

It worked a few days back; I even checked similar problems like here.
I tried adding the environment variables and everything; my HCL file also is not a problem as far as I know.
The HCL file is:
storage "file" {
path = "/home/***/vault/"
}
listener "tcp" {
address = "127.0.0.1:8200"
tls_disable = 1
}
My unseal.yml looks like this:
---
- name: Removing login and putting to another file
  shell: sed -n '7p' keys.txt > login.txt
- name: Remove all lines other than the keys
  shell: sed '6,$d' keys.txt > temp.txt
- name: Extracting the keys
  shell: cut -c15- temp.txt > unseal_keys.txt
- name: Deleting unnecessary files
  shell: rm temp.txt
- name: Unsealing the vault
  environment:
    VAULT_ADDR: http://127.0.0.1:8200
  shell: vault operator unseal $(awk 'NR==1' unseal_keys.txt)
- name: Unsealing the vault
  environment:
    VAULT_ADDR: http://127.0.0.1:8200
  shell: vault operator unseal $(awk 'NR==2' unseal_keys.txt)
- name: Unsealing the vault
  environment:
    VAULT_ADDR: http://127.0.0.1:8200
  shell: vault operator unseal $(awk 'NR==3' unseal_keys.txt)
  register: check
- debug: var=check.stdout_lines
- name: Login
  environment:
    VAULT_ADDR: http://127.0.0.1:8200
  shell: vault login $(sed 's/Initial Root Token://; s/ //' login.txt)
  register: checkLogin
- debug: var=checkLogin.stdout_lines
My start-server.yml looks like this
---
#- name: Disable mlock
#  shell: sudo setcap cap_ipc_lock=+ep $(readlink -f $(which vault))
#  shell: LimitMEMLOCK=infinity
- name: Start vault service
  systemd:
    state: started
    name: vault
    daemon_reload: yes
  environment:
    VAULT_ADDR: http://127.0.0.1:8200
  become: true
- pause:
    seconds: 15
This is the error shown:
fatal: [europa]: FAILED! => {"changed": true, "cmd": "vault operator unseal $(awk 'NR==1' unseal_keys.txt)", "delta": "0:00:00.049258", "end": "2019-09-17 12:25:48.987789", "msg": "non-zero return code", "rc": 2, "start": "2019-09-17 12:25:48.938531", "stderr": "Error unsealing: Put http://127.0.0.1:8200/v1/sys/unseal: dial tcp 127.0.0.1:8200: connect: connection refused", "stderr_lines": ["Error unsealing: Put http://127.0.0.1:8200/v1/sys/unseal: dial tcp 127.0.0.1:8200: connect: connection refused"], "stdout": "", "stdout_lines": []}
This is the main error:
"Error unsealing: Put http://127.0.0.1:8200/v1/sys/unseal: dial tcp 127.0.0.1:8200: connect: connection refused"
As it is showing connection refused, most probably your Vault service is not running.
Another thing I can suggest is to make a script named unseal_vault.sh and use it to unseal your Vault, instead of repeating the same tasks in your playbook.
Below is a script which I use in my setup to unseal Vault.
#!/bin/bash
# Assumptions: vault is already initialized
# Fetching first three keys to unseal the vault
KEY_1=$(cat keys.log | grep 'Unseal Key 1' | awk '{print $4}')
KEY_2=$(cat keys.log | grep 'Unseal Key 2' | awk '{print $4}')
KEY_3=$(cat keys.log | grep 'Unseal Key 3' | awk '{print $4}')
# Unseal using first key
curl --silent -X PUT \
  http://192.*.*.*:8200/v1/sys/unseal \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{
    "key": "'$KEY_1'"
  }'
# Unseal using second key
curl --silent -X PUT \
  http://192.*.*.*:8200/v1/sys/unseal \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{
    "key": "'$KEY_2'"
  }'
# Unseal using third key
curl --silent -X PUT \
  http://192.*.*.*:8200/v1/sys/unseal \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{
    "key": "'$KEY_3'"
  }'
And you can run this script using a single task in Ansible.
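A minimal sketch of that single task (the chdir path is hypothetical; it assumes unseal_vault.sh and keys.log sit together in that directory on the target):
- name: Unseal vault using the helper script
  shell: ./unseal_vault.sh
  args:
    chdir: /opt/vault/scripts   # hypothetical location of unseal_vault.sh and keys.log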

Ansible fortios_config backups not working

I use Fortinet for firewall automation, but I get the error "Error reading running config". I already followed this GitHub issue: https://github.com/ansible/ansible/issues/33392
But I did not find any solution. Please tell me, what am I doing wrong?
Ansible version: 2.7.0
Python version: 2.7.5
Fortinet: 60E
FortiOS version: 6.0.2
Here is what I am trying:
FortiOS.yml playbook:
---
- name: FortiOS Firewall Settings
  hosts: fortiFW
  connection: local
  vars_files:
    - /etc/ansible/vars/FortiOS_Settings_vars.yml
  tasks:
    - name: Backup current config
      fortios_config:
        host: 192.168.1.99
        username: admin
        password: Password#123
        backup: yes
        backup_path: /etc/ansible/forti_backup
Here is what I get as error:
ok: [192.168.1.99]
META: ran handlers
Read vars_file '/etc/ansible/vars/FortiOS_Settings_vars.yml'
TASK [Backup current config] ****************************************************************************************************************************************************************************************************
task path: /etc/ansible/FortiOS_Settings_test.yml:8
<192.168.1.99> ESTABLISH LOCAL CONNECTION FOR USER: root
<192.168.1.99> EXEC /bin/sh -c 'echo ~root && sleep 0'
<192.168.1.99> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226 `" && echo ansible-tmp-1539674386.05-16470854685226="` echo /root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226 `" ) && sleep 0'
Using module file /usr/lib/python2.7/site-packages/ansible/modules/network/fortios/fortios_config.py
<192.168.1.99> PUT /root/.ansible/tmp/ansible-local-6154Uq5Dmw/tmpt6JukB TO /root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226/AnsiballZ_fortios_config.py
<192.168.1.99> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226/ /root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226/AnsiballZ_fortios_config.py && sleep 0'
<192.168.1.99> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226/AnsiballZ_fortios_config.py && sleep 0'
<192.168.1.99> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1539674386.05-16470854685226/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
WARNING: The below traceback may not be related to the actual failure.
  File "/tmp/ansible_fortios_config_payload_b6IQmy/main.py", line 132, in main
    f.load_config(path=module.params['filter'])
  File "/usr/lib/python2.7/site-packages/pyFG/fortios.py", line 212, in load_config
    config_text = self.execute_command(command)
  File "/usr/lib/python2.7/site-packages/pyFG/fortios.py", line 154, in execute_command
    output = output + self._read_wrapper(o)
  File "/usr/lib/python2.7/site-packages/pyFG/fortios.py", line 120, in _read_wrapper
    return py23_compat.text_type(data)
fatal: [192.168.1.99]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "backup": true,
            "backup_filename": null,
            "backup_path": "/etc/ansible/forti_backup",
            "config_file": null,
            "file_mode": false,
            "filter": "",
            "host": "192.168.1.99",
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "src": null,
            "timeout": 60,
            "username": "admin",
            "vdom": null
        }
    },
    "msg": "Error reading running config"
}
When working with this module, I had the same issue. I looked into the source code of the module and found that this error occurs when filter is set to "" (an empty string). You can get facts about the device by changing filter to something like "firewall address". But then you will only get back the options from exactly that section, as if you had typed "show firewall address" on the CLI of the device.
I'm currently working on a solution to use Ansible for FortiGate automation, but it's not looking good. For example, FortiGates additionally do not support NETCONF, so you can't use NETCONF to send commands to the device.
So you're not doing anything wrong; the module is either not optimized, or, I would guess, the full configuration is too big to be read by the module, so you have to use the filter option to shrink it.
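Following that observation, a hedged sketch of the workaround: pass a non-empty filter so the module only reads one section (same connection parameters as the question's task; whether one section is enough depends on what you need backed up):
- name: Backup only the firewall address section
  fortios_config:
    host: 192.168.1.99
    username: admin
    password: Password#123
    backup: yes
    backup_path: /etc/ansible/forti_backup
    filter: "firewall address"   # a non-empty filter avoids the "Error reading running config" failure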

Homestead pass parameters to after.sh for xdebug autoconfigure

I put the following in after.sh to autoconfigure Xdebug for the project:
#!/bin/sh
echo "Configuring Xdebug"
ip=$(netstat -rn | grep "^0.0.0.0 " | cut -d " " -f10)
xdebug_config="/etc/php/$(php -v | head -n 1 | awk '{print $2}'|cut -c 1-3)/mods-available/xdebug.ini"
echo "IP for the xdebug to connect back: ${ip}"
echo "Xdebug Configuration path: ${xdebug_config}"
echo "Port for the Xdebug to connect back: ${XDEBUG_PORT}"
echo "Optimize for ${IDE} ide"
if [ $IDE=='atom' ]; then
  echo "Configuring xdebug for ATOM ide"
  if [ -z ${xdebug_config} ]; then
    sudo touch ${xdebug_config}
  fi
  sudo cat <<EOL >${xdebug_config}
zend_extension = xdebug.so
xdebug.remote_enable = 1
xdebug.remote_host=${ip}
xdebug.remote_port = ${XDEBUG_PORT}
xdebug.max_nesting_level = 1000
xdebug.remote_handler=dbgp
xdebug.remote_mode=req
xdebug.remote_autostart=true
xdebug.remote_log=xdebug.log
EOL
fi
Also I have the following settings to Homestead.yaml:
ip: 192.168.10.10
memory: 2048
cpus: 1
provider: virtualbox
authorize: ~/.ssh/id_rsa.pub
timeout: 120
keys:
  - ~/.ssh/id_rsa
folders:
  - map: /home/pcmagas/Kwdikas/php/apps/ellakcy_member_app/
    to: /home/vagrant/code
sites:
  - map: homestead.test
    to: /home/vagrant/code/web
    type: symfony
databases:
  - homestead
  - homestead-test
variables:
  - key: database_host
    value: 127.0.0.1
  - key: database_port
    value: 3306
  - key: database_name
    value: homestead
  - key: database_user
    value: homestead
  - key: database_password
    value: secret
  - key: smtp_host
    value: localhost
  - key: smtp_port
    value: 1025
  - key: smtp_user
    value: no-reply@example.com
  - key: IDE
    value: atom
  - key: XDEBUG_PORT
    value: 9091
name: ellakcy-member-app
hostname: ellakcy-member-app
But for some reason it cannot read the values from the environment variables defined in Homestead.yaml, as seen in the following output:
ellakcy-member-app: IP for the xdebug to connect back: 10.0.2.2
ellakcy-member-app: Xdebug Configuration path: /etc/php/7.2/mods-available/xdebug.ini
ellakcy-member-app: Port for the Xdebug to connect back:
ellakcy-member-app: Optimize for ide
ellakcy-member-app: Configuring xdebug for ATOM ide
As you can see, it fails to read the values of IDE and XDEBUG_PORT. Do you know why, and how I can fix that?
You can put the following in a parse_yaml.sh:
#!/bin/sh
parse_yaml() {
  local prefix=$2
  local s='[[:space:]]*' w='[a-zA-Z0-9_]*' fs=$(echo @|tr @ '\034')
  sed -ne "s|^\($s\)\($w\)$s:$s\"\(.*\)\"$s\$|\1$fs\2$fs\3|p" \
      -e "s|^\($s\)\($w\)$s:$s\(.*\)$s\$|\1$fs\2$fs\3|p" $1 |
  awk -F$fs '{
    indent = length($1)/2;
    vname[indent] = $2;
    for (i in vname) {if (i > indent) {delete vname[i]}}
    if (length($3) > 0) {
      vn=""; for (i=0; i<indent; i++) {vn=(vn)(vname[i])("_")}
      printf("%s%s%s=\"%s\"\n", "'$prefix'",vn, $2, $3);
    }
  }'
}
And in after.sh:
#!/bin/sh
# include parse_yaml function
. parse_yaml.sh
# read yaml file
eval $(parse_yaml zconfig.yml "config_")
# access yaml content
echo $config_development_database
thanks -> https://gist.github.com/pkuczynski/8665367
In my case, I tried the approach of having a file named xdebug.conf where I place everything that the default Xdebug config needs to get rewritten with:
zend_extension = xdebug.so
xdebug.remote_enable = 1
xdebug.remote_host = $ip
xdebug.remote_port = 9091
xdebug.max_nesting_level = 1000
xdebug.remote_handler=dbgp
xdebug.remote_mode=req
xdebug.remote_autostart=true
xdebug.remote_log=xdebug.log
The $ip indicates the value that is auto-replaced with the correct IP for Xdebug to connect back to. The script that actually updates the Xdebug configuration with the appropriate values is this one in my after.sh:
#!/bin/sh
code_path="/home/vagrant/code"
cd $code_path
# Some other bootstrapping
echo "Configuring Xdebug"
ip=$(netstat -rn | grep "^0.0.0.0 " | cut -d " " -f10)
xdebug_config="/etc/php/$(php -v | head -n 1 | awk '{print $2}'|cut -c 1-3)/mods-available/xdebug.ini"
echo "Xdebug config file ${xdebug_config}"
if [ -f "${code_path}/xdebug.conf" ]; then
echo "Specifying the ip with ${ip}"
sed "s/\$ip/${ip}/g" xdebug.conf > xdebug.conf.tmp
echo "Moving Into ${xdebug_config}"
cat xdebug.conf.tmp
sudo cp ./xdebug.conf.tmp ${xdebug_config}
else
echo "File not found"
fi
The last step is to .gitignore any xdebug.conf* file, so each developer has to create their own xdebug.conf.
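For completeness, a sketch of the ignore entry (assuming xdebug.conf lives in the project root, as above):
# .gitignore -- keep per-developer Xdebug settings out of version control
xdebug.conf*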
