Configuring security on Elasticsearch with Helm charts

Hello everyone. I have the ELK stack deployed on a Kubernetes cluster using the 7.17.1 Helm charts.
I'm trying to set up security for Elasticsearch, so I added these lines to the Elasticsearch values file:
esConfig:
  elasticsearch.yml: |
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
But I don't know how to create the certificate, since I can't access the pod to create it there.
Any solution would be appreciated; I've been stuck on this for two weeks.
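You don't need to exec into the running pod to generate the certificate. One common approach (a sketch, assuming Docker is available locally and image tag 7.17.1 to match the chart) is to run `elasticsearch-certutil` in a throwaway container, copy the keystore out, and load it into a Kubernetes secret:

```shell
# Generate a CA and node certificate in a one-off container (tag 7.17.1 assumed).
docker run --name elastic-certs -i -w /tmp \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.1 \
  /bin/sh -c "elasticsearch-certutil ca --out /tmp/elastic-stack-ca.p12 --pass '' && \
    elasticsearch-certutil cert --ca /tmp/elastic-stack-ca.p12 --ca-pass '' \
      --out /tmp/elastic-certificates.p12 --pass ''"

# Copy the keystore out of the container and clean up.
docker cp elastic-certs:/tmp/elastic-certificates.p12 .
docker rm elastic-certs

# Store it as a Kubernetes secret so the chart can mount it (secret name is an assumption).
kubectl create secret generic elastic-certificates \
  --from-file=elastic-certificates.p12
```

The secret can then be mounted into the pods via the chart's `secretMounts` value (e.g. `secretName: elastic-certificates`, `path: /usr/share/elasticsearch/config/certs`), so the relative `certs/elastic-certificates.p12` paths in `elasticsearch.yml` resolve.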

Related

Provision ILM policy for Elastic Agent (Fleet) streams

I'm new to Elastic Agent configuration and running the latest ECK build on Kubernetes.
While I really like the visibility Fleet offers, it seems I'm unable to provision custom ILM policies the way I can with Beats, for example:
setup.ilm:
  enabled: true
  overwrite: true
  policy_name: "filebeat"
  policy_file: /usr/share/filebeat/ilm-policy.json
  pattern: "{now/d}-000001"
Is there a way I can create a policy that does daily rollovers and only keeps the data for 90 days, preferably in code like the above block?
I've tried to add an ILM config to the Elasticsearch config block, but Elasticsearch doesn't accept this setting:
config:
  node.roles: ["data", "ingest", "remote_cluster_client"]
  setup.ilm:
    enabled: true
    overwrite: true
    policy_name: "logs"
    policy_file: /usr/share/elasticsearch/ilm-policy.json
    pattern: "{now/d}-000001"
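That config block fails because `setup.ilm` is a Beats-side setting; on the Elasticsearch side, ILM policies are cluster state created through the `_ilm/policy` API rather than `elasticsearch.yml`. A sketch of a policy with daily rollover and 90-day retention (the policy name `logs` and the localhost endpoint are assumptions):

```shell
# Create an ILM policy via the API: roll over daily, delete after 90 days.
curl -X PUT "http://localhost:9200/_ilm/policy/logs" \
  -H 'Content-Type: application/json' -d '
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "1d" }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}'
```

To keep it "in code", the same request body can live in a ConfigMap and be applied by a one-shot Job or an init hook against the cluster's HTTP endpoint.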

Deployment of Elasticsearch via Helm chart not working (Pod is not ready yet)

I am deploying EFK stack using elastic repo's helm charts. Elasticsearch pods are running into continuous errors.
**kubectl logs <pod-name> output**
java.lang.IllegalArgumentException: unknown setting [node.ml] please check that any required plugins are installed, or check the breaking changes documentation for removed settings
elasticsearch.yml:
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
**Roles enabled in values.yaml:**
roles:
  master: "true"
  ingest: "true"
  data: "true"
  remote_cluster_client: "true"
  ml: "true"
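The `unknown setting [node.ml]` error suggests the image being pulled is Elasticsearch 8.x, where the legacy `node.ml`/`node.master`-style settings (which the 7.x chart renders from that roles map) were removed in favor of `node.roles`. Assuming the 8.x chart is in use, roles are expressed as a list instead:

```yaml
# values.yaml for the 8.x elastic Helm chart (assumed): roles is a list,
# rendered as node.roles rather than the removed node.ml/node.master settings.
roles:
  - master
  - ingest
  - data
  - remote_cluster_client
  - ml
```

Alternatively, pinning the chart and `imageTag` to the same 7.x version keeps the legacy map format working.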

Automated Setup of Kibana and Elasticsearch with Filebeat Module in Elastic Cloud for Kubernetes (ECK)

I'm trying out the K8s Operator (a.k.a. ECK) and so far, so good.
However, I'm wondering what the right pattern is for, say, configuring Kibana and Elasticsearch with the Apache module.
I know I can do it ad hoc with:
filebeat setup --modules apache2 --strict.perms=false \
--dashboards --pipelines --template \
-E setup.kibana.host="${KIBANA_URL}"
But what's the automated way to do it? I see some docs for the Kibana dashboard portion of it but what about the rest (pipelines, etc.)?
Note: At some point, I may end up actually running a beat for the K8s cluster, but I'm not at that stage yet. At the moment, I just want to set Elasticsearch/Kibana up with the Apache module additions so that external Apache services' Filebeats can get ingested/displayed properly.
FYI, I'm on version 6.8 of the Elastic stack for now.
You can try autodiscovery using a label-based approach:
config:
  filebeat.autodiscover:
    providers:
      - type: kubernetes
        hints.default_config.enabled: "false"
        templates:
          - condition.contains:
              kubernetes.labels.app: "apache"
            config:
              - module: apache
                access:
                  enabled: true
                  var.paths: ["/path/to/log/apache/access.log*"]
                error:
                  enabled: true
                  var.paths: ["/path/to/log/apache/error.log*"]
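The autodiscover config above covers log collection, but not the one-time `filebeat setup` step (dashboards, pipelines, index template). One way to automate that step is a one-shot Kubernetes Job that runs the same setup command from the question; this is a sketch, and the image tag, service hostnames, and Job name below are assumptions:

```yaml
# Hypothetical one-shot Job running "filebeat setup" against the cluster.
apiVersion: batch/v1
kind: Job
metadata:
  name: filebeat-setup
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: setup
          image: docker.elastic.co/beats/filebeat:6.8.23
          args:
            - setup
            - --modules=apache2
            - --strict.perms=false
            - --dashboards
            - --pipelines
            - --template
            - -E
            - setup.kibana.host=http://kibana-kb-http:5601
            - -E
            - output.elasticsearch.hosts=["http://elasticsearch-es-http:9200"]
```

Because the Job is declarative, it can be re-applied whenever modules change, and it completes and exits without leaving a Beat running in the cluster.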

How to validate against any pod security policy during deployment of elasticsearch

I have deployed the Bitnami helm chart of elasticsearch on the Kubernetes environment.
https://github.com/bitnami/charts/tree/master/bitnami/elasticsearch
Unfortunately, I am getting the following error for the coordinating-only pod. However, the cluster is restricted.
Pods "elasticsearch-elasticsearch-coordinating-only-5b57786cf6-" is forbidden: unable to validate against any pod security policy:
[spec.initContainers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]; Deployment does not have minimum availability.
Is there anything I need to adapt or add in the default values.yaml?
Any suggestion to get rid of this error?
Thanks.
You can't validate it yourself if your cluster is restricted by a pod security policy. In your situation, someone (presumably an administrator) has blocked the option to run privileged containers.
Here's an example of how pod security policy blocks privileged containers:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example
spec:
  privileged: false  # Don't allow privileged pods!
  seLinux:
    rule: RunAsAny
----
What you need is an appropriate Role referencing a PodSecurityPolicy resource, plus a RoleBinding, that together allow you to run privileged containers.
This is explained well in the Kubernetes documentation under Enabling pod security policy.
So the solution was to set the following parameters in the values.yaml file and then simply deploy.
There's no need to create any Role or PodSecurityPolicy:
sysctlImage:
  enabled: false
curator:
  enabled: true
  rbac:
    # Specifies whether RBAC should be enabled
    enabled: true
  psp:
    # Specifies whether a podsecuritypolicy should be created
    create: true
Also run this command on each node:
sysctl -w vm.max_map_count=262144 && sysctl -w fs.file-max=65536

Filebeat 7.5.1 Missing a step for custom index names config?

I should be able to see a new custom index name dev-7.5.1-yyyy.MM.dd within Kibana, but instead it is still called filebeat.
I have the following set in my filebeat.yml config file, which should allow me to see a dev-* index in Kibana. However, events are still being indexed as filebeat-7.5.1-yyyy.MM.dd. Filebeat is running on an EC2 instance that can communicate with the Elasticsearch server on port 9200.
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

output.elasticsearch:
  hosts: ["my_es_host_ip:9200"]
  index: "dev-%{[agent.version]}-%{+yyyy.MM.dd}"

setup.template.enabled: true
setup.template.name: "dev-%{[agent.version]}"
setup.template.pattern: "dev-%{[agent.version]}-*"
Is there a step I'm missing? From reading over the docs and the reference file, I believe I have everything set correctly.
Appreciate any advice.
Looks like the following setting is needed in this case:
setup.ilm.enabled: false
After adding that to my config, it worked as expected. There is an open issue to make this easier to discover: https://github.com/elastic/beats/issues/11866
Use this configuration to get a custom index template and name:
setup.ilm.enabled: false
setup.template.name: "dev"
setup.template.pattern: "dev-*"
index: "dev-%{[agent.version]}-%{+yyyy.MM.dd}"
When index lifecycle management (ILM) is enabled, the default index is "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}-%{index_num}", for example "filebeat-7.13.0-2021-05-25-000001". Custom index settings are ignored when ILM is enabled. If you're sending events to a cluster that supports index lifecycle management, see Index lifecycle management (ILM) to learn how to change the index name.
If index lifecycle management is enabled (which is typically the default), setup.template.name and setup.template.pattern are ignored.
Ref:
https://www.elastic.co/guide/en/beats/filebeat/current/change-index-name.html
https://www.elastic.co/guide/en/beats/filebeat/current/configuration-template.html
https://www.elastic.co/guide/en/beats/filebeat/current/ilm.html