How to generate Elasticsearch security users via Ansible - elasticsearch

We just installed Elasticsearch 7.x. We want to use the X-Pack security module. We have already automated everything via Ansible, but we have a problem creating/setting the built-in users' passwords.
The Elasticsearch procedure is:
Run on the system: /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive (or auto).
You are then asked for a password for each built-in user. Is there any way to automate this, e.g. some auto-answer mechanism in Ansible or anything else?
Thanks

You can try interactive mode together with the Ansible expect module: https://docs.ansible.com/ansible/latest/modules/expect_module.html
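A minimal sketch of that approach (untested; the prompt regexes and password variables are placeholders, and the exact prompt strings vary between Elasticsearch versions, e.g. the kibana_system user was still named kibana on older 7.x releases):

- name: Set built-in user passwords non-interactively
  # requires pexpect on the managed host
  ansible.builtin.expect:
    command: /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
    responses:
      # Confirm the initial "continue? [y/N]" question.
      'Please confirm that you would like to continue.*': 'y'
      # One pair of entries per built-in user; extend with apm_system,
      # logstash_system, beats_system, remote_monitoring_user as needed.
      'Enter password for \[elastic\].*': '{{ elastic_password }}'
      'Reenter password for \[elastic\].*': '{{ elastic_password }}'
      'Enter password for \[kibana_system\].*': '{{ kibana_password }}'
      'Reenter password for \[kibana_system\].*': '{{ kibana_password }}'
    timeout: 60
  no_log: true   # keep the passwords out of the Ansible output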

- hosts: all
  name: "Elasticsearch with SSL/TLS enabled"
  roles:
    - role: elastic.elasticsearch
      vars:
        es_api_port: 9200
        es_config:
          action.destructive_requires_name: true
          bootstrap.memory_lock: true
          cluster.name: lab
          discovery.seed_hosts: "0.0.0.0:9300"
          discovery.type: single-node
          http.port: 9200
          indices.query.bool.max_clause_count: 8192
          network.host: "0.0.0.0"
          node.data: true
          node.master: true
          node.ml: false
          node.name: lab1
          reindex.remote.whitelist: "*:*"
          search.max_buckets: 250000
          transport.port: 9300
          xpack.ilm.enabled: true
          xpack.ml.enabled: false
          xpack.monitoring.collection.enabled: true
          xpack.monitoring.collection.interval: 30s
          xpack.monitoring.enabled: true
          xpack.security.audit.enabled: false
          #xpack.security.enabled: true
          xpack.sql.enabled: true
          xpack.watcher.enabled: false
        es_api_basic_auth_username: "elastic"
        es_api_basic_auth_password: "changeme"
        es_data_dirs:
          - /opt/elasticsearch/data
        es_heap_size: 2g
        es_plugins:
          - plugin: ingest-attachment
        es_validate_certs: false
        es_version: "7.17.0"
        es_users:
          native:
            elastic:
              password: helloakash1234
            kibana_system:
              password: hellokibana1234
            logstash_system:
              password: hellologs1234
This works fine for me!
es_users:
  native:
    elastic:
      password: helloakash1234
With the snippet above, the username is "elastic" and the password is "helloakash1234".

If you use the auto mode, random passwords are generated and written to the console, where you can capture them.
Another solution is to call the Change Password API to set each user's password after the fact.
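For example, an Ansible task along these lines (a sketch; the URL, credentials, and variable names are placeholders) sets the kibana_system password through that API:

- name: Change the kibana_system password via the Change Password API
  ansible.builtin.uri:
    url: "https://localhost:9200/_security/user/kibana_system/_password"
    method: POST
    user: elastic
    password: "{{ elastic_password }}"
    force_basic_auth: true
    body_format: json
    body:
      password: "{{ kibana_system_password }}"
    validate_certs: false   # lab only; verify certificates in production
  no_log: true

To have a known elastic password to authenticate with in the first place, you can set the documented bootstrap.password keystore entry before the node first starts (bin/elasticsearch-keystore add bootstrap.password).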

Related

How to create multiple filebeats dashboard in Kibana

I have multiple Filebeats running on multiple systems with a custom index name. Filebeat sends data to Logstash, and Logstash sends it on to Elasticsearch. Everything works and the logs show up in the Discover tab. But when I try to load the dashboards into Kibana with 'filebeat setup -e', they do not load and an error is shown (image was attached).
filebeat.yml:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.dashboards.enabled: true
setup.dashboards.index: "care-stagging-*"
setup.kibana:
  host: "http://xx.xx.xx.xx:5601"
  username: "elastic"
  password: "VKkLOmFXUupzgXNnahp"
  ssl.verification_mode: none
output.logstash:
  hosts: ["xx.xx.xx.xx:5044"]
  index: "care-stagging"
setup.template.name: "care-stagging"
setup.template.pattern: "care-stagging-*"
setup.ilm.enabled: false
setup.template.enabled: true
setup.template.overwrite: false
processors:
  - add_fields:
      fields:
        host.ip: "xx.xx.xx.xx"
logging.metrics.period: 30s
Please share how I can load multiple Filebeat dashboards in Kibana.
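Without the error text this is only a guess, but a common cause with this layout is that filebeat setup talks to Elasticsearch and Kibana directly and refuses to load dashboards while output.logstash is the active output. Overriding the output just for the setup run usually helps (the host and credentials below are placeholders):

filebeat setup --dashboards \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["xx.xx.xx.xx:9200"]' \
  -E output.elasticsearch.username=elastic \
  -E output.elasticsearch.password='<password>'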

Deployment of Elasticsearch via Helm chart not working (pod is not ready yet)

I am deploying the EFK stack using the elastic repo's Helm charts. The Elasticsearch pods keep running into errors.
kubectl logs <pod-name> output:
java.lang.IllegalArgumentException: unknown setting [node.ml] please check that any required plugins are installed, or check the breaking changes documentation for removed settings
elasticsearch.yml:
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
Roles enabled in values.yaml:
roles:
  master: "true"
  ingest: "true"
  data: "true"
  remote_cluster_client: "true"
  ml: "true"
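For context (an assumption, since the chart and image versions are not shown): Elasticsearch 8.x removed the legacy boolean role settings such as node.ml in favor of node.roles, which produces exactly this "unknown setting" error when a 7.x-style values file meets an 8.x image. Recent versions of the elastic/elasticsearch chart therefore take the roles as a list, along these lines:

roles:
  - master
  - ingest
  - data
  - remote_cluster_client
  - ml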

How to configure Filebeat to read log files using the ELK stack?

I am new to the current ELK stack.
I need to be able to read logs from a path using Elasticsearch, Kibana, and Filebeat.
I have tried to configure them step by step with the ELK guides, but I still cannot see my logs in Kibana.
For now I am working only with localhost.
Here is how my .yml files are configured:
elasticsearch.yml:
xpack.security.enabled: true
kibana.yml:
elasticsearch.username: "elastic"
elasticsearch.password: "elastic1"
filebeat.yml:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - C:\\logs\\*.log
  - type: filestream
    enabled: false
    paths:
      - C:\logs\*
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
  host: "localhost:5601"
  username: "kibana_system"
  password: "kibana_system1"
output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "elastic1"
setup.kibana:
  host: "localhost:5601"
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
So I start Elasticsearch and Kibana, and that works. I set up Filebeat using PowerShell as in the guide. Many dashboards are loaded, but I can't see anything related to my logs in the Discover tab...
Please tell me if I am doing anything wrong, or whether I need to configure the files further.
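It is hard to say without more detail, but a quick way to narrow this down is to check whether the events reach Elasticsearch at all, independently of Kibana:

curl -u elastic:elastic1 "http://localhost:9200/filebeat-*/_count?pretty"

If the count grows over time, the data is arriving and the problem is on the Kibana side (the selected index pattern or the time range in Discover); if it stays at zero, the problem is in the Filebeat input or output configuration.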

Elasticsearch - how to set up http settings on Kubernetes

I have Elasticsearch installed on Kubernetes.
Could you tell me how I can set this option: http.max_content_length
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: elasticsearch-sample
spec:
  version: 7.5.0
  nodeSets:
    - name: default
      config:
        node.master: true
        node.data: true
        node.ingest: true
        node.ml: true
        http.max_content_length: 300   # <-- is this a proper place?
      count: 3
Yes, that is the proper place, but you are missing the unit of this size; please add it as well:
http.max_content_length: 300mb   <-- note the `mb`

Kibana and Elasticsearch CA config

I'm new to the ELK stack. I am working on Kibana alerting, but I got stuck at the Kibana and Elasticsearch CA step while following this link: https://www.elastic.co/guide/en/kibana/7.x/configuring-tls.html#configuring-tls-kib-es
elasticsearch.yml:
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: transport_key.p12
xpack.security.transport.ssl.truststore.path: transport_key.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: http.p12
kibana.yml:
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["https://localhost:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "kibana_password"
kibana.index: ".kibana"
xpack.security.encryptionKey: "32 random letters"
csp.strict: true
xpack.encryptedSavedObjects.encryptionKey: "32 random letters"
server.ssl.enabled: true
server.ssl.certificate: "/path/to/kibana-server.crt"
server.ssl.key: "/path/to/kibana-server.key"
elasticsearch.ssl.certificateAuthorities: [ "path/to/config/elasticsearch-ca.pem" ]
When I start Kibana with ./bin/kibana, the prompt shows an error (screenshot was attached).
I run Ubuntu 18.04.4 LTS, and the ELK stack is installed directly on it. Please tell me what I did wrong.
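The screenshot did not survive, but a common failure at exactly this step is Kibana not trusting the Elasticsearch HTTP certificate because elasticsearch-ca.pem is missing or points to the wrong CA. If the CA chain is bundled in the HTTP keystore, it can be extracted with openssl (a sketch, assuming the CA really is inside http.p12):

openssl pkcs12 -in http.p12 -cacerts -nokeys -out elasticsearch-ca.pem

Then point elasticsearch.ssl.certificateAuthorities in kibana.yml at the resulting file.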
