I am currently trying to forward GCP's Cloud Logging to Filebeat, which then forwards it to Elasticsearch, following these docs, with the GCP module settings in Filebeat configured according to these docs.
Currently I am only trying to forward audit logs, so my gcp.yml module config is as follows:
- module: gcp
  vpcflow:
    enabled: false
    var.project_id: my-gcp-project-id
    var.topic: gcp-vpc-flowlogs
    var.subscription_name: filebeat-gcp-vpc-flowlogs-sub
    var.credentials_file: ${path.config}/gcp-service-account-xyz.json
    #var.internal_networks: [ "private" ]
  firewall:
    enabled: false
    var.project_id: my-gcp-project-id
    var.topic: gcp-vpc-firewall
    var.subscription_name: filebeat-gcp-firewall-sub
    var.credentials_file: ${path.config}/gcp-service-account-xyz.json
    #var.internal_networks: [ "private" ]
  audit:
    enabled: true
    var.project_id: <my prod name>
    var.topic: sample_topic
    var.subscription_name: filebeat-gcp-audit
    var.credentials_file: ${path.config}/<something>.<something>
When I run sudo filebeat setup, I keep getting this error:
2021-05-21T09:02:25.232Z ERROR cfgfile/reload.go:258 Error loading configuration files: 1 error: Unable to hash given config: missing field accessing '0.firewall' (source:'/etc/filebeat/modules.d/gcp.yml')
I can still start the service, but I don't see any logs forwarded from GCP's Cloud Logging Pub/Sub topic to Elasticsearch.
Help, or tips on best practice, would also be appreciated.
Update
If I follow the docs here instead, I get the same error, but for audit rather than firewall.
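For reference, a minimal audit-only gcp.yml sketch with the unused filesets left out entirely rather than disabled (the project, topic, subscription, and key-file names below are placeholders, not values from the question). The "missing field" error often shows up when a fileset key such as firewall: ends up with nothing nested under it, so the indentation needs to look exactly like this:

- module: gcp
  audit:
    enabled: true
    # placeholder values; substitute your own project, topic, subscription, and key file
    var.project_id: my-gcp-project-id
    var.topic: gcp-audit-logs
    var.subscription_name: filebeat-gcp-audit
    var.credentials_file: ${path.config}/gcp-service-account.json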
Filebeat on Kubernetes cannot output to Elasticsearch.
Elasticsearch itself is OK.
Filebeat runs as a DaemonSet, and the relevant environment variables have been added.
filebeat.yml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      hints.default_config:
        enabled: false
        type: container
        paths:
          - /var/log/containers/*-${data.container.id}.log

output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
Kubernetes
Using an nginx app to test:
image=nginx:latest
The following Deployment annotation has been added:
co.elastic.logs/enabled: "true"
pod.yaml (in node1)
But nothing is output to Elasticsearch; no logs or indices for the related input are seen.
Filebeat pod (node1) logs
I expect Filebeat to collect the logs of the specified container (Pod) and ship them to Elasticsearch.
@baymax first off, you don't need to explicitly define this property anywhere:
co.elastic.logs/enabled: "true"
since filebeat, by default, reads all the container log files on the node.
Secondly, you are disabling hints.default_config, which means Filebeat will only read the log files of pods that are annotated as above; however, you haven't provided any template config to be used for reading such log files (see the sketch at the end of this answer).
For more info, read: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover-hints.html
Thirdly, in your Filebeat logs, do you see any harvesters being started, handles created, and events published? Posting a snapshot of the logs doesn't give a clear picture. Maybe try starting Filebeat in debug mode for a few minutes and paste the logs here with proper formatting.
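For the second point, here is a rough sketch of a hints-based config with the default config left enabled, modelled on the documentation page linked above (treat it as a starting point rather than a drop-in fix):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      # fallback input used for any pod that doesn't override it via co.elastic.logs/* hints
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*${data.kubernetes.container.id}.log

With the default config enabled, individual pods can still be opted out by annotating them with co.elastic.logs/enabled: "false".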
I am trying to configure Filebeat version 7.17.5 (amd64), libbeat 7.17.5, to read Spring Boot logs and send them via Logstash to Elasticsearch. Everything works: logs are sent and I can read them in Kibana. The problem is that I configured Filebeat in /etc/filebeat/filebeat.yml and defined only one source of logs there, yet Filebeat still picks up all the logs from /var/log.
This is my only input config:
filebeat.inputs:
  - type: filestream
    id: some_id
    enabled: true
    paths:
      - "/var/log/dir_with_logs/application.log"
But when I check the status of Filebeat, I see this:
[input] log/input.go:171 Configured paths: [/var/log/auth.log* /var/log/secure*]
And I also get logs from the auth and secure files in Kibana, which I don't want.
What am I doing wrong, or what am I missing?
Based on the configured paths of /var/log/auth.log* and /var/log/secure*, I think this is the Filebeat system module. You can disable the system module by renaming /etc/filebeat/modules.d/system.yml to /etc/filebeat/modules.d/system.yml.disabled.
Alternatively you can run the filebeat modules command to disable the module (it simply renames the file for you).
filebeat modules disable system
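If you want to confirm what is currently active before and after, the modules subcommand can list it; the restart step below assumes a systemd-based install:

filebeat modules list              # prints the Enabled and Disabled modules
sudo filebeat modules disable system
sudo systemctl restart filebeat    # pick up the change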
I'm trying out the K8s Operator (a.k.a. ECK) and so far, so good.
However, I'm wondering what the right pattern is for, say, configuring Kibana and Elasticsearch with the Apache module.
I know I can do it ad hoc with:
filebeat setup --modules apache2 --strict.perms=false \
--dashboards --pipelines --template \
-E setup.kibana.host="${KIBANA_URL}"
But what's the automated way to do it? I see some docs for the Kibana dashboard portion of it but what about the rest (pipelines, etc.)?
Note: At some point, I may end up actually running a beat for the K8s cluster, but I'm not at that stage yet. At the moment, I just want to set Elasticsearch/Kibana up with the Apache module additions so that external Apache services' Filebeats can get ingested/displayed properly.
FYI, I'm on version 6.8 of the Elastic stack for now.
You can try autodiscovery using a label-based approach.
config:
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.default_config.enabled: "false"
      templates:
        - condition.contains:
            kubernetes.labels.app: "apache"
          config:
            - module: apache
              access:
                enabled: true
                var.paths: ["/path/to/log/apache/access.log*"]
              error:
                enabled: true
                var.paths: ["/path/to/log/apache/error.log*"]
I have a Kubernetes cluster and I am trying to deploy and configure heartbeat to monitor services.
When heartbeat starts, I see this in the logs:
2019-08-28T15:31:19.116Z ERROR kubernetes/util.go:90 kubernetes: Querying for pod failed with error: kubernetes api: Failure 404 pods "ip-172-20-64-197" not found
This is my autodiscover config:
heartbeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            equals:
              kubernetes.namespace: myappnamespace
          config:
            - type: http
              enabled: true
              urls: ["${data.host}:${data.port}"]
              schedule: "@every 1s"
I also see this in the logs, which suggests that autodiscover isn't working properly:
2019-08-28T15:31:19.117Z DEBUG [kubernetes] kubernetes/watcher.go:189 Got 0 items from the resource sync
2019-08-28T15:31:19.117Z DEBUG [kubernetes] kubernetes/watcher.go:194 Done syncing 0 items from the resource sync
I've tried all sorts of combinations of the above configuration but to no avail. Can anyone point me in the right direction? Thanks.
I am using the ReadonlyREST plugin to secure Elasticsearch and Kibana, but once I added the following to my readonlyrest.yml, Kibana started giving me an "Authentication Exception". What could be the reason for that?
kibana.yml
elasticsearch.username: "kibana"
elasticsearch.password: "kibana123"
readonlyrest.yml
readonlyrest:
  enable: true
  response_if_req_forbidden: Access denied!!!
  access_control_rules:
    - name: "Accept all requests from localhost"
      type: allow
      hosts: [XXX.XX.XXX.XXX]
    - name: "::Kibana server::"
      auth_key: kibana:kibana123
      type: allow
    - name: "::Kibana user::"
      auth_key: kibana:kibana123
      type: allow
      kibana_access: rw
      indices: [".kibana*","log-*"]
My Kibana and Elasticsearch are hosted on the same server; is that the reason?
Another question: if I want to make my Elasticsearch server accessible only from a particular host, can I put that host in the first section of access_control_rules, as shown in readonlyrest.yml above?
Elastic version: 6.2.3
Log error: I don't remember it exactly, but it was [ACL] Forbidden and showed false for all three access control rules.