How to add a basic user/pass authentication for ElasticSearch - elasticsearch

I deployed Elasticsearch to my Azure Kubernetes environment by following the page below.
https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-deploy-elasticsearch.html
It works fine.
But I want to add basic user/password authentication for the Elasticsearch page. I really don't get why it's so complicated that I have to Google it.
Then I checked this page:
https://www.elastic.co/guide/en/elasticsearch/reference/current/get-started-enable-security.html
I guess I need to add "xpack.security.enabled: true" to the elasticsearch.yml file, but where? How can I do that? I copy/pasted it into the yaml file and it didn't work.
Then the documentation below mentions creating passwords for the built-in users, but it only covers manual installations; I'm not sure how to do this with Kubernetes.
https://www.elastic.co/guide/en/elasticsearch/reference/current/get-started-built-in-users.html
Is there any basic documentation available for creating authentication on Elasticsearch? How can I do that?
Regards.

You can do it by installing Elasticsearch using the Helm chart and modifying values.yaml, which allows you to modify elasticsearch.yml.
You can enable xpack.security.enabled: true with the following configuration:
esConfig:
  elasticsearch.yml: |
    xpack.security.enabled: true
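As a usage sketch (the release name and the values.yaml file name are assumptions, not something from the answer above), installing the official chart with that override could look like this:
# Add the Elastic Helm repository and install the chart with the modified values.yaml (Helm 3 syntax)
helm repo add elastic https://helm.elastic.co
helm repo update
helm install elasticsearch elastic/elasticsearch -f values.yaml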

Related

Datadog skip ingestion of Spring actuator health endpoint

I was trying to configure my application to not report my health endpoint in Datadog APM. I checked the documentation here: https://docs.datadoghq.com/tracing/guide/ignoring_apm_resources/?tab=kuberneteshelm&code-lang=java
And I tried adding the config in my helm deployment.yaml file:
- name: DD_APM_IGNORE_RESOURCES
  value: GET /actuator/health
This had no effect. Traces were still showing up in Datadog. The method and path are correct. I changed the value a few times with different combinations (tried a few regex options). No go.
Then I tried the DD_APM_FILTER_TAGS_REJECT environment variable, trying to ignore http.route:/actuator/health. Also without success.
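For reference, a sketch of what that second attempt might look like as an env entry (the variable name and tag value are taken from the description above; whether this is the right place to set it is exactly what is in question):
- name: DD_APM_FILTER_TAGS_REJECT
  value: "http.route:/actuator/health"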
I even ran the agent and application locally to see if there was anything to do with the environment, but the configs were not applied.
What are more options to try in this scenario?
This is the span detail:

ElasticSearch on local machine Windows 10 asking Username & Password

I have just started exploring Elasticsearch + Kibana + Logstash combined, as my requirement is to integrate this with other tool chains.
I have successfully downloaded Elasticsearch & Kibana from the official websites.
https://www.elastic.co/downloads/kibana
https://www.elastic.co/downloads/elasticsearch
And I am able to start Elasticsearch as well.
When I go to the browser to access it, it asks me to enter credentials.
I saw lots of tutorials on YouTube and no one faced this problem.
I need to know what configuration settings need to be applied here.
My OS is: Windows 10
Thanks in advance!
Add the two lines below in \elasticsearch-8.2.2\config\elasticsearch.yml:
# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
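If you then need credentials to get past the browser prompt, one hedged option (assuming a default 8.2.x install; on Windows the tool ships as a .bat script) is to reset the password of the built-in elastic user from the Elasticsearch directory, which prints a new password to use at the prompt:
bin\elasticsearch-reset-password -u elastic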

Elastic APM different index name

As of a few weeks ago we added filebeat, metricbeat and apm to our dotnet core application running on our kubernetes cluster.
It all works nicely, and recently we discovered that filebeat and metricbeat are able to write to a different index based on several rules.
We wanted to do the same for APM; however, searching the documentation we can't find any option to set the name of the index to write to.
Is this even possible, and if yes, how is it configured?
I also tried finding the current name apm-* within the codebase, but couldn't find any matches for configuring it.
The problem we'd like to fix is that every space in Kibana gets to see the APM metrics of every application. Certain applications shouldn't be within this space, so I thought a new apm-application-* index would do the trick...
Edit
Since it shouldn't be configured on the agent but instead in the cloud service console, I'm having trouble 'user-overriding' the settings to my liking.
The rules I want to have:
When an application does not live inside the kubernetes namespace default OR kube-system, write to an index called apm-7.8.0-application-type-2020-07
All other applications in other namespaces should remain in the default indices
I see you can add output.elasticsearch.indices to make this happen: an array of index selector rules supporting conditionals and formatted strings.
I tried this by copying the same thing I had for metricbeat, updated it to use the APM syntax, and came to the following 'user-override':
output.elasticsearch.indices:
  - index: 'apm-%{[observer.version]}-%{[kubernetes.labels.app]}-%{[processor.event]}-%{+yyyy.MM}'
    when:
      not:
        or:
          - equals:
              kubernetes.namespace: default
          - equals:
              kubernetes.namespace: kube-system
but when I use this setup it tells me:
Your changes cannot be applied
'output.elasticsearch.indices.when': is not allowed
Set output.elasticsearch.indices.0.index to apm-%{[observer.version]}-%{[kubernetes.labels.app]}-%{[processor.event]}-%{+yyyy.MM}
Set output.elasticsearch.indices.0.when.not.or.0.equals.kubernetes.namespace to default
Set output.elasticsearch.indices.0.when.not.or.1.equals.kubernetes.namespace to kube-system
Then I updated the example accordingly, but came to the same conclusion, as it was not valid either.
In your ES Cloud console, you need to Edit the cluster configuration, scroll to the APM section and then click "User override settings". In there you can override the target index by adding the following property:
output.elasticsearch.index: "apm-application-%{[observer.version]}-{type}-%{+yyyy.MM.dd}"
Note that if you change this setting, you also need to modify the corresponding index template to match the new index name.
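As a rough sketch of that last step (the template name apm-7.8.0 and the file name are assumptions; check what GET _template/apm-* returns on your cluster first), you could clone the existing APM index template and widen its index pattern:
# Look at the existing APM index template (legacy _template API on 7.x)
curl -s "https://<your-es-endpoint>/_template/apm-7.8.0?pretty"
# Re-upload the same body under a new name, with "index_patterns" changed to cover
# the overridden index name, e.g. ["apm-application-*"]
curl -X PUT "https://<your-es-endpoint>/_template/apm-application" \
  -H 'Content-Type: application/json' \
  -d @apm-application-template.json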

Secure built-in user credentials for Kibana/ElasticSearch

Setup
ElasticSearch v6.8
Context
I'm trying to build a couple of AMIs for ElasticSearch and Kibana using Packer.
I've been reading the official docs and have run into something confusing (for me at least).
I'm setting up the built-in users in ElasticSearch according to this doc. I'm using the auto option as opposed to interactive.
bin/elasticsearch-setup-passwords auto
Once this is done I need to modify the kibana.yml file to use the built-in user whilst communicating with ElasticSearch. This doc describes what to do. Essentially you add these two lines:
elasticsearch.username: "kibana"
elasticsearch.password: "kibanapassword"
Questions
How can I automatically read the password output for the built-in Kibana user (bin/elasticsearch-setup-passwords auto) so that I can add it to the kibana.yml file?
Is storing the password in plain text in the 'kibana.yml' file secure? I fear it is not... but is there an alternative?
Thanks
For elasticsearch-setup-passwords, rather than using auto, look into --batch, so you can define the password and then use that for Kibana.
You probably want to use a keystore for Kibana.
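For the keystore part, a minimal sketch (assuming a standard Kibana install; each add command prompts for the value) would be:
# Create a Kibana keystore and move the credentials out of kibana.yml
bin/kibana-keystore create
bin/kibana-keystore add elasticsearch.username
bin/kibana-keystore add elasticsearch.password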

Elasticsearch Disable Delete indexes

I am using Elasticsearch 1.7.1
After I create my indexes, I do not want existing indexes to be deleted (either manually or by some unintentional execution against my ES).
Is it possible to set any configurations in elasticsearch and restart the service to achieve the above?
I have tried these steps but they are not helping me out.
As for preventing index deletion via a wildcard /* or /_all, one thing you can do is add the following setting to your config file:
action.destructive_requires_name: true
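If a restart is inconvenient, a hedged alternative (assuming your Elasticsearch version allows updating this setting dynamically) is to apply the same setting at runtime via the cluster settings API:
# Apply action.destructive_requires_name without a restart
curl -X PUT "localhost:9200/_cluster/settings" -d '{
  "persistent": { "action.destructive_requires_name": true }
}'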
I have solved this via the HTTP CORS settings in Elasticsearch.
