Run Filebeat with Ansible

I want to run Filebeat, which I installed using Ansible.
It works fine when I execute the command in a terminal.
However, when I run Filebeat with Ansible, the log below is output.
Ansible role task that runs Filebeat:
- name: run vote history filebeat
  remote_user: irteam
  shell: /home1/irteam/apps/filebeat/filebeat -e -c /home1/irteam/apps/filebeat/vote_history.yml -d publish &
Ansible error log:
"stderr": "2019-10-18T21:37:39.793+0900\tINFO\tinstance/beat.go:606\tHome path: [/home1/irteam/apps/filebeat-7.2.1-linux-x86_64] Config path: [/home1/irteam/apps/filebeat-7.2.1-linux-x86_64] Data path: [/home1/irteam/apps/filebeat-7.2.1-linux-x86_64/data] Logs path: [/home1/irteam/apps/filebeat-7.2.1-linux-x86_64/logs]\n2019-10-18T21:37:39.794+0900\tINFO\tinstance/beat.go:614\tBeat ID: 4e840153-d8fd-44c4-ab32-7e2d3dd34d3f\n2019-10-18T21:37:39.794+0900\tINFO\t[seccomp]\tseccomp/seccomp.go:93\tSyscall filter could not be installed because the kernel does not support seccomp\n2019-10-18T21:37:39.794+0900\tINFO\t[beat]\tinstance/beat.go:902\tBeat info\t{\"system_info\": {\"beat\": {\"path\": {\"config\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64\", \"data\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64/data\", \"home\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64\", \"logs\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64/logs\"}, \"type\": \"filebeat\", \"uuid\": \"4e840153-d8fd-44c4-ab32-7e2d3dd34d3f\"}}}\n2019-10-18T21:37:39.794+0900\tINFO\t[beat]\tinstance/beat.go:911\tBuild info\t{\"system_info\": {\"build\": {\"commit\": \"dd3f47f0fb299aa5de9c5c1468faacc1b9b3c27f\", \"libbeat\": \"7.2.1\", \"time\": \"2019-07-24T17:10:04.000Z\", \"version\": \"7.2.1\"}}}\n2019-10-18T21:37:39.794+0900\tINFO\t[beat]\tinstance/beat.go:914\tGo runtime info\t{\"system_info\": {\"go\": {\"os\":\"linux\",\"arch\":\"amd64\",\"max_procs\":4,\"version\":\"go1.12.4\"}}}\n2019-10-18T21:37:39.796+0900\tINFO\t[beat]\tinstance/beat.go:918\tHost info\t{\"system_info\": {\"host\": {\"architecture\":\"x86_64\",\"boot_time\":\"2019-09-16T14:01:30+09:00\",\"containerized\":false,\"name\":\"dev-vos-api-ncl\",\"ip\":[\"127.0.0.1/8\",\"10.113.103.6/23\"],\"kernel_version\":\"3.10.0-693.2.2.el7.x86_64\",\"mac\":[\"7e:76:cd:f1:39:c6\"],\"os\":{\"family\":\"redhat\",\"platform\":\"centos\",\"name\":\"CentOS Linux\",\"version\":\"7 (Core)\",\"major\":7,\"minor\":4,\"patch\":1708,\"codename\":\"Core\"},\"timezone\":\"KST\",\"timezone_offset_sec\":32400,\"id\":\"97b3ae2e453f442cb387546f7d3d3214\"}}}\n2019-10-18T21:37:39.796+0900\tINFO\t[beat]\tinstance/beat.go:947\tProcess info\t{\"system_info\": {\"process\": {\"capabilities\": {\"inheritable\":null,\"permitted\":null,\"effective\":null,\"bounding\":[\"chown\",\"dac_override\",\"dac_read_search\",\"fowner\",\"fsetid\",\"kill\",\"setgid\",\"setuid\",\"setpcap\",\"linux_immutable\",\"net_bind_service\",\"net_broadcast\",\"net_admin\",\"net_raw\",\"ipc_lock\",\"ipc_owner\",\"sys_module\",\"sys_rawio\",\"sys_chroot\",\"sys_ptrace\",\"sys_pacct\",\"sys_admin\",\"sys_boot\",\"sys_nice\",\"sys_resource\",\"sys_time\",\"sys_tty_config\",\"mknod\",\"lease\",\"audit_write\",\"audit_control\",\"setfcap\",\"mac_override\",\"mac_admin\",\"syslog\",\"wake_alarm\",\"block_suspend\"],\"ambient\":null}, \"cwd\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64\", \"exe\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64/filebeat\", \"name\": \"filebeat\", \"pid\": 115545, \"ppid\": 1, \"seccomp\": {\"mode\":\"disabled\"}, \"start_time\": \"2019-10-18T21:37:39.160+0900\"}}}\n2019-10-18T21:37:39.796+0900\tINFO\tinstance/beat.go:292\tSetup Beat: filebeat; Version: 7.2.1\n2019-10-18T21:37:39.797+0900\tINFO\t[publisher]\tpipeline/module.go:97\tBeat name: dev-vos-api-ncl\n2019-10-18T21:37:39.797+0900\tWARN\tbeater/filebeat.go:152\tFilebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. 
If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.\n2019-10-18T21:37:39.797+0900\tINFO\tinstance/beat.go:421\tfilebeat start running.\n2019-10-18T21:37:39.797+0900\tINFO\t[monitoring]\tlog/log.go:118\tStarting metrics logging every 30s\n2019-10-18T21:37:39.797+0900\tINFO\tregistrar/registrar.go:145\tLoading registrar data from /home1/irteam/apps/filebeat-7.2.1-linux-x86_64/data/registry/filebeat/data.json\n2019-10-18T21:37:39.798+0900\tINFO\tregistrar/registrar.go:152\tStates Loaded from registrar: 2\n2019-10-18T21:37:39.798+0900\tWARN\tbeater/filebeat.go:368\tFilebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.\n2019-10-18T21:37:39.798+0900\tINFO\tcrawler/crawler.go:72\tLoading Inputs: 1\n2019-10-18T21:37:39.798+0900\tINFO\tlog/input.go:148\tConfigured paths: [/home1/irteam/logs/vos_api/vote_history*.log]\n2019-10-18T21:37:39.798+0900\tINFO\tinput/input.go:114\tStarting input of type: log; ID: 5211242161898657702 \n2019-10-18T21:37:39.798+0900\tINFO\tcrawler/crawler.go:106\tLoading and starting Inputs completed. Enabled inputs: 1",
"stderr_lines": [
"2019-10-18T21:37:39.793+0900\tINFO\tinstance/beat.go:606\tHome path: [/home1/irteam/apps/filebeat-7.2.1-linux-x86_64] Config path: [/home1/irteam/apps/filebeat-7.2.1-linux-x86_64] Data path: [/home1/irteam/apps/filebeat-7.2.1-linux-x86_64/data] Logs path: [/home1/irteam/apps/filebeat-7.2.1-linux-x86_64/logs]",
"2019-10-18T21:37:39.794+0900\tINFO\tinstance/beat.go:614\tBeat ID: 4e840153-d8fd-44c4-ab32-7e2d3dd34d3f",
"2019-10-18T21:37:39.794+0900\tINFO\t[seccomp]\tseccomp/seccomp.go:93\tSyscall filter could not be installed because the kernel does not support seccomp",
"2019-10-18T21:37:39.794+0900\tINFO\t[beat]\tinstance/beat.go:902\tBeat info\t{\"system_info\": {\"beat\": {\"path\": {\"config\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64\", \"data\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64/data\", \"home\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64\", \"logs\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64/logs\"}, \"type\": \"filebeat\", \"uuid\": \"4e840153-d8fd-44c4-ab32-7e2d3dd34d3f\"}}}",
"2019-10-18T21:37:39.794+0900\tINFO\t[beat]\tinstance/beat.go:911\tBuild info\t{\"system_info\": {\"build\": {\"commit\": \"dd3f47f0fb299aa5de9c5c1468faacc1b9b3c27f\", \"libbeat\": \"7.2.1\", \"time\": \"2019-07-24T17:10:04.000Z\", \"version\": \"7.2.1\"}}}",
"2019-10-18T21:37:39.794+0900\tINFO\t[beat]\tinstance/beat.go:914\tGo runtime info\t{\"system_info\": {\"go\": {\"os\":\"linux\",\"arch\":\"amd64\",\"max_procs\":4,\"version\":\"go1.12.4\"}}}",
"2019-10-18T21:37:39.796+0900\tINFO\t[beat]\tinstance/beat.go:918\tHost info\t{\"system_info\": {\"host\": {\"architecture\":\"x86_64\",\"boot_time\":\"2019-09-16T14:01:30+09:00\",\"containerized\":false,\"name\":\"dev-vos-api-ncl\",\"ip\":[\"127.0.0.1/8\",\"10.113.103.6/23\"],\"kernel_version\":\"3.10.0-693.2.2.el7.x86_64\",\"mac\":[\"7e:76:cd:f1:39:c6\"],\"os\":{\"family\":\"redhat\",\"platform\":\"centos\",\"name\":\"CentOS Linux\",\"version\":\"7 (Core)\",\"major\":7,\"minor\":4,\"patch\":1708,\"codename\":\"Core\"},\"timezone\":\"KST\",\"timezone_offset_sec\":32400,\"id\":\"97b3ae2e453f442cb387546f7d3d3214\"}}}",
"2019-10-18T21:37:39.796+0900\tINFO\t[beat]\tinstance/beat.go:947\tProcess info\t{\"system_info\": {\"process\": {\"capabilities\": {\"inheritable\":null,\"permitted\":null,\"effective\":null,\"bounding\":[\"chown\",\"dac_override\",\"dac_read_search\",\"fowner\",\"fsetid\",\"kill\",\"setgid\",\"setuid\",\"setpcap\",\"linux_immutable\",\"net_bind_service\",\"net_broadcast\",\"net_admin\",\"net_raw\",\"ipc_lock\",\"ipc_owner\",\"sys_module\",\"sys_rawio\",\"sys_chroot\",\"sys_ptrace\",\"sys_pacct\",\"sys_admin\",\"sys_boot\",\"sys_nice\",\"sys_resource\",\"sys_time\",\"sys_tty_config\",\"mknod\",\"lease\",\"audit_write\",\"audit_control\",\"setfcap\",\"mac_override\",\"mac_admin\",\"syslog\",\"wake_alarm\",\"block_suspend\"],\"ambient\":null}, \"cwd\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64\", \"exe\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64/filebeat\", \"name\": \"filebeat\", \"pid\": 115545, \"ppid\": 1, \"seccomp\": {\"mode\":\"disabled\"}, \"start_time\": \"2019-10-18T21:37:39.160+0900\"}}}",
"2019-10-18T21:37:39.796+0900\tINFO\tinstance/beat.go:292\tSetup Beat: filebeat; Version: 7.2.1",
"2019-10-18T21:37:39.797+0900\tINFO\t[publisher]\tpipeline/module.go:97\tBeat name: dev-vos-api-ncl",
"2019-10-18T21:37:39.797+0900\tWARN\tbeater/filebeat.go:152\tFilebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.",
"2019-10-18T21:37:39.797+0900\tINFO\tinstance/beat.go:421\tfilebeat start running.",
"2019-10-18T21:37:39.797+0900\tINFO\t[monitoring]\tlog/log.go:118\tStarting metrics logging every 30s",
"2019-10-18T21:37:39.797+0900\tINFO\tregistrar/registrar.go:145\tLoading registrar data from /home1/irteam/apps/filebeat-7.2.1-linux-x86_64/data/registry/filebeat/data.json",
"2019-10-18T21:37:39.798+0900\tINFO\tregistrar/registrar.go:152\tStates Loaded from registrar: 2",
"2019-10-18T21:37:39.798+0900\tWARN\tbeater/filebeat.go:368\tFilebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.",
"2019-10-18T21:37:39.798+0900\tINFO\tcrawler/crawler.go:72\tLoading Inputs: 1",
"2019-10-18T21:37:39.798+0900\tINFO\tlog/input.go:148\tConfigured paths: [/home1/irteam/logs/vos_api/vote_history*.log]",
"2019-10-18T21:37:39.798+0900\tINFO\tinput/input.go:114\tStarting input of type: log; ID: 5211242161898657702 ",
"2019-10-18T21:37:39.798+0900\tINFO\tcrawler/crawler.go:106\tLoading and starting Inputs completed. Enabled inputs: 1"

It looks like the command is not returning 0 as its return code, even though it succeeded (with warnings).
By default, Ansible's shell module considers any command with a return code other than 0 to be an error.
You should check your command's return code with echo $? right after executing it in your terminal. Then you can either:
- fix everything so that it returns 0 all the time (unless there really is an error), or
- adjust failed_when on your task to include the return codes considered a success (replace X with an actual return code you want to trust):
- name: run vote history filebeat
  remote_user: irteam
  shell: /home1/irteam/apps/filebeat/filebeat -e -c /home1/irteam/apps/filebeat/vote_history.yml -d publish &
  register: filebeat_cmd
  failed_when: filebeat_cmd.rc not in [0, X]
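Since filebeat here is a long-running process, another option is to launch it fire-and-forget so Ansible does not sit on the spawned shell. This is a sketch, not part of the original answer: the paths are copied from the question, while nohup, the output redirection, and the async values are assumptions.
- name: run vote history filebeat (detached)
  remote_user: irteam
  # nohup + redirection detaches the process from the task's shell
  shell: nohup /home1/irteam/apps/filebeat/filebeat -e -c /home1/irteam/apps/filebeat/vote_history.yml -d publish >/dev/null 2>&1 &
  async: 10   # allow up to 10 seconds for the launch itself
  poll: 0     # fire and forget: do not wait for the command to finish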

Related

Telegraf SNMP plugin Error: IF-MIB::ifTable: Unknown Object Identifier

Steps followed to install the SNMP manager and agent on EC2:
sudo apt-get update
sudo apt-get install snmp snmp-mibs-downloader
sudo apt-get update
sudo apt-get install snmpd
I opened sudo nano /etc/snmp/snmp.conf and commented out the following line:
#mibs :
Then I went into the configuration file (sudo nano /etc/snmp/snmpd.conf) and modified it as below:
# Listen for connections from the local system only
#agentAddress udp:127.0.0.1:161        <--- commented out this line
# Listen for connections on all interfaces (both IPv4 and IPv6)
agentAddress udp:161,udp6:[::1]:161    <--- uncommented this line to make it work
Using the command below, I can get SNMP data:
snmpwalk -v 2c -c public 127.0.0.1 .
From inside the Docker container I can get the data as well:
snmpwalk -v 2c -c public host.docker.internal .
Docker-compose:
telegraf_snmp:
  image: telegraf:1.22.1
  container_name: telegraf_snmp
  restart: always
  depends_on:
    - influxdb
  networks:
    - analytics
  extra_hosts:
    - "host.docker.internal:host-gateway"
  # ports:
  #   - "161:161/udp"
  volumes:
    - /mnt/telegraf/snmp:/var/lib/telegraf
    - ./etc/telegraf/snmp/:/etc/telegraf/snmp/
  env_file:
    - secrets.env
  environment:
    INFLUXDB_URL: http://influxdb:8086
  command:
    --config-directory /etc/telegraf/snmp/telegraf.d
    --config /etc/telegraf/snmp/telegraf.conf
  links:
    - influxdb
  logging:
    options:
      max-size: "10m"
      max-file: "3"
Telegraf Input conf:
[[inputs.snmp]]
## Agent addresses to retrieve values from.
## format: agents = ["<scheme://><hostname>:<port>"]
## scheme: optional, either udp, udp4, udp6, tcp, tcp4, tcp6.
## default is udp
## port: optional
## example: agents = ["udp://127.0.0.1:161"]
## agents = ["tcp://127.0.0.1:161"]
## agents = ["udp4://v4only-snmp-agent"]
# agents = ["udp://127.0.0.1:161"]
agents = ["udp://host.docker.internal:161"]
## Timeout for each request.
timeout = "15s"
## SNMP version; can be 1, 2, or 3.
version = 2
## SNMP community string.
community = "public"
## Agent host tag
# agent_host_tag = "agent_host"
## Number of retries to attempt.
retries = 3
## The GETBULK max-repetitions parameter.
# max_repetitions = 10
## SNMPv3 authentication and encryption options.
##
## Security Name.
# sec_name = "myuser"
## Authentication protocol; one of "MD5", "SHA", or "".
# auth_protocol = "MD5"
## Authentication password.
# auth_password = "pass"
## Security Level; one of "noAuthNoPriv", "authNoPriv", or "authPriv".
# sec_level = "authNoPriv"
## Context Name.
# context_name = ""
## Privacy protocol used for encrypted messages; one of "DES", "AES", "AES192", "AES192C", "AES256", "AES256C", or "".
### Protocols "AES192", "AES192", "AES256", and "AES256C" require the underlying net-snmp tools
### to be compiled with --enable-blumenthal-aes (http://www.net-snmp.org/docs/INSTALL.html)
# priv_protocol = ""
## Privacy password used for encrypted messages.
# priv_password = ""
## Add fields and tables defining the variables you wish to collect. This
## example collects the system uptime and interface variables. Reference the
## full plugin documentation for configuration details.
[[inputs.snmp.field]]
oid = "RFC1213-MIB::sysUpTime.0"
name = "uptime"
[[inputs.snmp.field]]
oid = "RFC1213-MIB::sysName.0"
name = "source"
is_tag = true
[[inputs.snmp.table]]
oid = "IF-MIB::ifTable"
name = "interface"
inherit_tags = ["source"]
[[inputs.snmp.table.field]]
oid = "IF-MIB::ifDescr"
name = "ifDescr"
is_tag = true
Telegraf logs:
2022-09-09T10:10:09Z I! Starting Telegraf 1.22.1
2022-09-09T10:10:09Z I! Loaded inputs: snmp
2022-09-09T10:10:09Z I! Loaded aggregators:
2022-09-09T10:10:09Z I! Loaded processors:
2022-09-09T10:10:09Z I! Loaded outputs: file influxdb_v2
2022-09-09T10:10:09Z I! Tags enabled: host=7a38697f4527
2022-09-09T10:10:09Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"7a38697f4527", Flush Interval:10s
2022-09-09T10:10:09Z E! [telegraf] Error running agent: could not initialize input inputs.snmp: initializing table interface: translating: MIB search path: /root/.snmp/mibs:/usr/share/snmp/mibs:/usr/share/snmp/mibs/iana:/usr/share/snmp/mibs/ietf
Cannot find module (IF-MIB): At line 1 in (none)
IF-MIB::ifTable: Unknown Object Identifier: exit status 2
2022-09-09T10:10:11Z I! Starting Telegraf 1.22.1
2022-09-09T10:10:11Z I! Loaded inputs: snmp
2022-09-09T10:10:11Z I! Loaded aggregators:
2022-09-09T10:10:11Z I! Loaded processors:
2022-09-09T10:10:11Z I! Loaded outputs: file influxdb_v2
2022-09-09T10:10:11Z I! Tags enabled: host=7a38697f4527
2022-09-09T10:10:11Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"7a38697f4527", Flush Interval:10s
2022-09-09T10:10:11Z E! [telegraf] Error running agent: could not initialize input inputs.snmp: initializing table interface: translating: MIB search path: /root/.snmp/mibs:/usr/share/snmp/mibs:/usr/share/snmp/mibs/iana:/usr/share/snmp/mibs/ietf
Cannot find module (IF-MIB): At line 1 in (none)
IF-MIB::ifTable: Unknown Object Identifier: exit status 2
But in Telegraf I get the above error.
I checked the MIBs directory using ls /usr/share/snmp/mibs and cannot find an IF-MIB file there, even after installing:
$ sudo apt-get install snmp-mibs-downloader
$ sudo download-mibs
How can I resolve this issue? Do I need to follow some additional steps?
The SNMP plugin in Telegraf should be able to pull the data from the SNMP agent.
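No answer is included in this excerpt, but one hedged observation: the MIB search path in the error (/root/.snmp/mibs:/usr/share/snmp/mibs:...) refers to paths inside the telegraf container, so installing snmp-mibs-downloader on the host does not place any MIB files where the container can see them. A sketch of one possible fix, assuming the host's /usr/share/snmp/mibs was populated by download-mibs, is to mount that directory into the container read-only:
telegraf_snmp:
  image: telegraf:1.22.1
  # ... rest of the service definition as above ...
  volumes:
    - /mnt/telegraf/snmp:/var/lib/telegraf
    - ./etc/telegraf/snmp/:/etc/telegraf/snmp/
    # assumption: share the host's MIBs with the container so IF-MIB resolves
    - /usr/share/snmp/mibs:/usr/share/snmp/mibs:ro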

Log4j temporary fix in Elasticsearch Helm chart using -Dlog4j2.formatMsgNoLookups

I was trying to set up an Elasticsearch cluster in AKS using the Helm chart, but due to the Log4j vulnerability I wanted to set it up with the option -Dlog4j2.formatMsgNoLookups set to true. I get an unknown flag error when I pass the argument in the helm command.
Ref: https://artifacthub.io/packages/helm/elastic/elasticsearch/6.8.16
helm upgrade elasticsearch elasticsearch --set imageTag=6.8.16 esJavaOpts "-Dlog4j2.formatMsgNoLookups=true"
Error: unknown shorthand flag: 'D' in -Dlog4j2.formatMsgNoLookups=true
I have also tried to add the below in the values.yaml file:
esConfig: {}
# elasticsearch.yml: |
#   key:
#     nestedkey: value
log4j2.properties: |
  -Dlog4j2.formatMsgNoLookups = true
but the values are not added to /usr/share/elasticsearch/config/jvm.options, /usr/share/elasticsearch/config/log4j2.properties, or the environment variables.
First of all, here's a good source of knowledge about mitigating the Log4j2 security issue, if this is the reason you reached here.
Here's how you can write your values.yaml for the Elasticsearch chart:
esConfig:
  log4j2.properties: |
    logger.discovery.name = org.elasticsearch.discovery
    logger.discovery.level = debug
A ConfigMap will be generated by Helm:
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-master-config
...
data:
  log4j2.properties: |
    logger.discovery.name = org.elasticsearch.discovery
    logger.discovery.level = debug
And the Log4j configuration will be mounted into your Elasticsearch pod as:
...
volumeMounts:
  ...
  - name: esconfig
    mountPath: /usr/share/elasticsearch/config/log4j2.properties
    subPath: log4j2.properties
Update: how to set and add multiple configuration files.
You can set up other ES configuration files in your values.yaml. All the files that you specify here will be part of the ConfigMap, and each of the files will be mounted at /usr/share/elasticsearch/config/ in the Elasticsearch container. Example:
esConfig:
  elasticsearch.yml: |
    node.master: true
    node.data: true
  log4j2.properties: |
    logger.discovery.name = org.elasticsearch.discovery
    logger.discovery.level = debug
  jvm.options: |
    # You can also place a comment here.
    -Xmx1g -Xms1g -Dlog4j2.formatMsgNoLookups=true
  roles.yml: |
    click_admins:
      run_as: [ 'clicks_watcher_1' ]
      cluster: [ 'monitor' ]
      indices:
        - names: [ 'events-*' ]
          privileges: [ 'read' ]
          field_security:
            grant: [ 'category', '@timestamp', 'message' ]
          query: '{"match": {"category": "click"}}'
ALL of the configurations above are for illustration only, to demonstrate how to add multiple configuration files in the values.yaml. Please substitute them with your own settings.
If you update and put a value under esConfig, you will need to remove the curly brackets:
esConfig:
  log4j2.properties: |
    key = value
I would rather suggest changing the /config/jvm.options file and adding this at the end:
-Dlog4j2.formatMsgNoLookups=true
The helm chart has an option to set java options.
esJavaOpts: "" # example: "-Xmx1g -Xms1g"
In your case, setting it like this should be the solution:
esJavaOpts: "-Dlog4j2.formatMsgNoLookups=true"
As I see in the updated values.yml in the elastic repository:
esConfig: {}
# log4j2.properties: |
#   key = value
You probably need to uncomment the log4j2.properties part.

Trying to set logstash conf file in docker-compose.yml on Mac OS

Here is what I have specified in my yml for Logstash. I've tried multiple variations of quotes, no quotes, etc.:
volumes:
  - "./logstash:/etc/logstash/conf:ro"
command:
  - "logstash -f /etc/logstash/conf/simplels.conf"
And simplels.conf contains this:
input {
  stdin{}
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout{}
}
The overall file structure is this; I'm running docker-compose up from the docker folder and getting Exit 1 on the Logstash container due to my 'command' parameter:
/docker:
  docker-compose.yml
  /logstash
    simplels.conf
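No answer is included in this excerpt, but a hedged guess at the Exit 1: in docker-compose, each item of a list-form command becomes a single argv element, so the string "logstash -f /etc/logstash/conf/simplels.conf" is handed to the image as one argument instead of being split into a command and its flags. Two forms that avoid this (paths copied from the question):
# string form: docker-compose splits it like a shell command line
command: logstash -f /etc/logstash/conf/simplels.conf
# or exec form: one list element per argument
command: ["logstash", "-f", "/etc/logstash/conf/simplels.conf"]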

Not able to see newly added logs in Docker ELK

I'm using sebp/elk's dockerised ELK. I've managed to get it running on my local machine and I'm trying to input dummy log data by SSH'ing into the docker container and running:
/opt/logstash/bin/logstash --path.data /tmp/logstash/data \
-e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'
After typing in some random text, I cannot see it indexed by Elasticsearch when I visit http://localhost:9200/_search?pretty&size=1000
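No answer is included in this excerpt. Two hedged checks (standard Elasticsearch/Logstash behaviour, not from the original post): make sure each input line is terminated with Enter so the stdin input actually emits an event, and list the indices to see whether Logstash created one at all:
# a logstash-* index should appear once an event has been flushed
curl 'http://localhost:9200/_cat/indices?v'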

Curl returns an Invalid JSON error in a Jenkins Pipeline script but returns the expected response in a bash shell or in a Jenkins Freestyle job

I am writing a Jenkins Pipeline job for setting up AWS infrastructure using API calls to our in-house AWS CLI wrapper library. Running the raw bash scripts on a CentOS box or as a Jenkins Freestyle job works fine. However, it fails in the context of a Pipeline job. I think the quotes may need to be different for the Pipeline job, but I am not sure how.
After further investigation, I found that the curl command returns the wrong response from the service when running the scripts within a Jenkins Pipeline job.
pipeline {
    agent any
    stages {
        stage('Checkout code from Git'){
            steps {
                echo "Checkout code from a GitHub repository"
                // Checkout code from a GitHub repository
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [[$class: 'SubmoduleOption', disableSubmodules: false, parentCredentials: false, recursiveSubmodules: true, reference: '', trackingSubmodules: false]], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'xxxx', url: 'git@github.com:bbc/repo.git']]])
            }
        }
        stage('Call our internal AWS CLI Wrapper System API to perform an ACTION on a specified ENVIRONMENT') {
            steps {
                script {
                    if("${params.ENVIRONMENT}" == 'int' && "${params.ACTION}" == 'create'){
                        echo "ENVIRONMENT=${params.ENVIRONMENT}, ACTION=${params.ACTION}"
                        echo ""
                        sh '''#!/bin/bash
                            # Create Neptune Cluster for the Int environment
                            cd blah-db
                            echo "Current working directory is $PWD"
                            CLOUD_FORMATION_FILE=$PWD/infrastructure/templates/neptune-cluster.json
                            echo "The CloudFormation file to operate on is $CLOUD_FORMATION_FILE"
                            echo "Running jq to transform the source CloudFormation file"
                            template=$(jq -M '.Parameters.Env.Default="int"' $CLOUD_FORMATION_FILE)
                            echo "Echoing the transformed CloudFormation file: \n$template"
                            echo "Running curl to make the http request to our internal AWS CLI Wrapper System"
                            curl -d "{\"aws_account\": \"1111111111\", \"region\": \"us-east-1\", \"name_suffix\": \"cluster\", \"template\": $template}" \
                                -H 'Content-Type: application/json' -H 'Accept: application/json' https://base.api.url/v1/services/blah-neptune/int/stacks \
                                --cert /path/to/client/certificate/client.crt --key /path/to/client/private-key/client.key
                            cd ..
                            pwd
                            # Set a timer to run for 300 seconds or 5 minutes to create a delay to allow for the Neptune Cluster to be fully provisioned first before adding instances to it.
                        '''
                    }
                }
            }
        }
    }
}
The actual result that I get from making the API call:
{"error": "Invalid JSON. Expecting property name: line 1 column 1 (char 1)"}
Try changing the curl call as follows:
curl -d '{"aws_account": "1111111111", "region": "us-east-1", "name_suffix": "cluster", "template": $template}'
Or assign the whole command to a variable and print it out to check whether it is what you want:
cmd = '''#!/bin/bash
cd blah-db
...
'''
echo cmd // compare the output string to the cmd of freestyle job.
sh cmd
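One caveat on the first suggestion: inside single quotes the shell will not expand $template, so it would be sent literally. A sketch that builds the payload with jq instead, sidestepping the escaping problem (field names, URL, and certificate paths copied from the question's curl call):
# let jq assemble and escape the JSON body
payload=$(jq -n --argjson template "$template" \
  '{aws_account: "1111111111", region: "us-east-1", name_suffix: "cluster", template: $template}')
curl -d "$payload" \
  -H 'Content-Type: application/json' -H 'Accept: application/json' https://base.api.url/v1/services/blah-neptune/int/stacks \
  --cert /path/to/client/certificate/client.crt --key /path/to/client/private-key/client.key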
