Kibana and Elasticsearch CA configuration - elasticsearch

I am new to the ELK stack. I was assigned to configure Kibana alerting, but I got stuck at the Kibana and Elasticsearch CA step while following this guide: https://www.elastic.co/guide/en/kibana/7.x/configuring-tls.html#configuring-tls-kib-es
elasticsearch.yml
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: transport_key.p12
xpack.security.transport.ssl.truststore.path: transport_key.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: http.p12
kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["https://localhost:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "kibana_password"
kibana.index: ".kibana"
xpack.security.encryptionKey: "32 random letters"
csp.strict: true
xpack.encryptedSavedObjects.encryptionKey: "32 random letters"
server.ssl.enabled: true
server.ssl.certificate: "/path/to/kibana-server.crt"
server.ssl.key: "/path/to/kibana-server.key"
elasticsearch.ssl.certificateAuthorities: [ "path/to/config/elasticsearch-ca.pem" ]
When I start Kibana with ./bin/kibana, the prompt shows an error (screenshot not available).
I am running Ubuntu 18.04.4 LTS, with the ELK stack installed directly on the host. Please tell me what I did wrong.
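The error itself is not visible here, but a common failure at this step is a missing or mismatched CA file. A minimal sketch of generating the certificates with elasticsearch-certutil, run from the Elasticsearch home directory and assuming the tool's default output file names:

```shell
# Create a CA (produces elastic-stack-ca.p12 by default)
./bin/elasticsearch-certutil ca

# Create a transport certificate signed by that CA
# (produces elastic-certificates.p12)
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

# Create a certificate for the HTTP layer (interactive; can emit http.p12)
./bin/elasticsearch-certutil http

# Extract the CA certificate as PEM for Kibana's
# elasticsearch.ssl.certificateAuthorities setting
openssl pkcs12 -in elastic-stack-ca.p12 -clcerts -nokeys \
  -out config/elasticsearch-ca.pem
```

If the keystores are password-protected, the passwords also need to be added to the Elasticsearch keystore (e.g. xpack.security.http.ssl.keystore.secure_password via bin/elasticsearch-keystore add).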

Related

Configuring security on elasticsearch with helm charts

Hello everyone. I have ELK deployed on a k8s cluster using the Helm charts (version 7.17.1).
I'm trying to set up security for Elasticsearch, so I added these lines to the Elasticsearch values file:
esConfig:
  elasticsearch.yml: |
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
However, I don't know how to create the certificate, since I can't access the pod to create it there.
Any solution would be appreciated; I've been stuck for two weeks.
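One common approach, sketched here under the assumption that the official Elasticsearch image is available locally: generate the certificate outside the cluster and hand it to the pods as a Kubernetes secret (the esConfig above already expects it at certs/elastic-certificates.p12):

```shell
# Generate a CA locally using the official image
docker run --rm -v "$PWD:/certs" -w /certs \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.1 \
  bin/elasticsearch-certutil ca --out /certs/elastic-stack-ca.p12 --pass ""

# Generate a node certificate signed by that CA
docker run --rm -v "$PWD:/certs" -w /certs \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.1 \
  bin/elasticsearch-certutil cert --ca /certs/elastic-stack-ca.p12 \
  --ca-pass "" --out /certs/elastic-certificates.p12 --pass ""

# Store the certificate as a secret the chart can mount
kubectl create secret generic elastic-certificates \
  --from-file=elastic-certificates.p12
```

The official Elastic Helm chart can then mount the secret into the pods via its secretMounts value (mount path /usr/share/elasticsearch/config/certs); treat the exact secret and mount names here as assumptions to adapt to your values file.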

Deployment of Elasticsearch via Helm chart not working (pod is not ready yet)

I am deploying the EFK stack using the Elastic repo's Helm charts. The Elasticsearch pods are running into continuous errors.
**kubectl logs <pod-name> output**
java.lang.IllegalArgumentException: unknown setting [node.ml] please check that any required plugins are installed, or check the breaking changes documentation for removed settings
elasticsearch.yml:
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
**Roles enabled in Values.yaml:**
roles:
  master: "true"
  ingest: "true"
  data: "true"
  remote_cluster_client: "true"
  ml: "true"
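The unknown setting [node.ml] error is what Elasticsearch 8.x reports for the legacy boolean role settings (node.master, node.data, node.ml, ...), which were removed in 8.0 in favor of a single node.roles list. If the chart version renders the roles map above into those legacy settings while the image is 8.x, aligning chart and image versions, or switching to the list form, should resolve it. A sketch of the 8.x-style setting, assuming an 8.x image:

```yaml
# elasticsearch.yml, 8.x style: one list instead of per-role booleans
node.roles: [ master, ingest, data, remote_cluster_client, ml ]
```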

How to configure Filebeat to read log files, using ELK stack?

I am new to the ELK stack.
I need to be able to read logs from a path using Elasticsearch, Kibana, and Filebeat.
I've tried to configure them step by step with the ELK guides, but I still cannot see my logs in Kibana.
For now I am working only with localhost.
Here is how my .yml files are configured:
elasticsearch.yml:
xpack.security.enabled: true
kibana.yml:
elasticsearch.username: "elastic"
elasticsearch.password: "elastic1"
filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - C:\\logs\\*.log
- type: filestream
  enabled: false
  paths:
    - C:\logs\*
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
  host: "localhost:5601"
  username: "kibana_system"
  password: "kibana_system1"
output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "elastic1"
setup.kibana:
  host: "localhost:5601"
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
So I start Elasticsearch and Kibana, and that works. I set up Filebeat using PowerShell, as in the guide, and many dashboards are loaded. But I can't see anything related to my logs in the Discover tab.
Please tell me if I am doing anything wrong, or whether I need to configure the files further.
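Two quick checks worth running before digging deeper, using Filebeat's built-in self-tests (run from the Filebeat install directory on Windows):

```shell
# Validate the configuration file syntax
.\filebeat.exe test config -c filebeat.yml

# Verify Filebeat can actually reach the configured Elasticsearch output
.\filebeat.exe test output -c filebeat.yml
```

Note also that setup.kibana appears twice in the file above; duplicate top-level YAML keys are at best silently collapsed, so the second occurrence is worth removing.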

How to generate Elasticsearch security users via Ansible

We just installed Elasticsearch 7.x. We want to use the X-Pack security module. We have already automated everything via Ansible, but we have a problem creating/setting the built-in users and passwords.
The Elasticsearch how-to says:
Run on the system: /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive or auto.
You are then asked for a password for each built-in user. Is there any way to automate this, for example with some auto-answer mechanism in Ansible?
Thanks
You can try to use interactive mode together with the Ansible expect module: https://docs.ansible.com/ansible/latest/modules/expect_module.html
- hosts: all
  name: "Elasticsearch with SSL/TLS enabled"
  roles:
    - role: elastic.elasticsearch
  vars:
    es_api_port: 9200
    es_config:
      action.destructive_requires_name: true
      bootstrap.memory_lock: true
      cluster.name: lab
      discovery.seed_hosts: "0.0.0.0:9300"
      discovery.type: single-node
      http.port: 9200
      indices.query.bool.max_clause_count: 8192
      network.host: "0.0.0.0"
      node.data: true
      node.master: true
      node.ml: false
      node.name: lab1
      reindex.remote.whitelist: "*:*"
      search.max_buckets: 250000
      transport.port: 9300
      xpack.ilm.enabled: true
      xpack.ml.enabled: false
      xpack.monitoring.collection.enabled: true
      xpack.monitoring.collection.interval: 30s
      xpack.monitoring.enabled: true
      xpack.security.audit.enabled: false
      #xpack.security.enabled: true
      xpack.sql.enabled: true
      xpack.watcher.enabled: false
    es_api_basic_auth_username: "elastic"
    es_api_basic_auth_password: "changeme"
    es_data_dirs:
      - /opt/elasticsearch/data
    es_heap_size: 2g
    es_plugins:
      - plugin: ingest-attachment
    es_validate_certs: false
    es_version: "7.17.0"
    es_users:
      native:
        elastic:
          password: helloakash1234
        kibana_system:
          password: hellokibana1234
        logstash_system:
          password: hellologs1234
This works fine for me!!
es_users:
  native:
    elastic:
      password: helloakash1234
With the above code the username will be "elastic" and the password will be "helloakash1234".
If you use auto mode, random passwords are generated and printed to the console, where you can capture them.
Another solution is to call the Change password API in order to change user passwords after the fact.
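The expect-module approach mentioned above can be sketched roughly like this; the prompt patterns and the vault-stored password variable are assumptions to adapt against the actual CLI prompts:

```yaml
- name: Set built-in user passwords non-interactively
  ansible.builtin.expect:
    command: /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
    responses:
      # One pattern per prompt; the regexes must match the CLI's prompt text
      "Please confirm that you would like to continue.*": "y"
      "Enter password for .*": "{{ es_builtin_password }}"
      "Reenter password for .*": "{{ es_builtin_password }}"
  no_log: true
```

Using no_log keeps the passwords out of the Ansible output; this sets the same password for every built-in user, so extend the responses map if each user needs a distinct one.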

Connect kibana to elasticsearch in kubernetes cluster

I have a running Elasticsearch cluster and I am trying to connect Kibana to this cluster (same node). Currently the page hangs when I try to open the service in my browser at the NodePort address. In my Kibana pod logs, the last few messages are:
{"type":"log","@timestamp":"2017-10-13T17:23:46Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0.0.0.0:5601"}
{"type":"log","@timestamp":"2017-10-13T17:23:46Z","tags":["status","ui settings","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Elasticsearch plugin is red","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-13T17:23:49Z","tags":["status","plugin:ml@5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Request Timeout after 3000ms","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
My kibana.yml file that is mounted into the kibana pod has the following config:
server.name: kibana-logging
server.host: 0.0.0.0
elasticsearch.url: http://elasticsearch:9300
xpack.security.enabled: false
xpack.monitoring.ui.container.elasticsearch.enabled: true
and my elasticsearch.yml file has the following config settings (I have 3 es pods)
cluster.name: elasticsearch-logs
node.name: ${HOSTNAME}
network.host: 0.0.0.0
bootstrap.memory_lock: false
xpack.security.enabled: false
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ["172.17.0.3:9300", "172.17.0.4:9300", "172.17.0.4:9300"]
I feel like the issue is currently with the network.host field but I'm not sure. What fields am I missing/do I need to modify in order to connect to a kibana pod to elasticsearch if they are in the same cluster/node? Thanks!
ES Service:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    component: elasticsearch
    role: master
spec:
  type: NodePort
  selector:
    component: elasticsearch
    role: master
  ports:
    - name: http
      port: 9200
      targetPort: 9200
      nodePort: 30303
      protocol: TCP
Kibana Svc
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: default
  labels:
    component: kibana
spec:
  type: NodePort
  selector:
    component: kibana
  ports:
    - port: 80
      targetPort: 5601
      protocol: TCP
EDIT:
After changing the port to 9200 in kibana.yml, here is what I see at the end of the logs when I try to access Kibana:
{"type":"log","@timestamp":"2017-10-13T21:36:30Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0.0.0.0:5601"}
{"type":"log","@timestamp":"2017-10-13T21:36:30Z","tags":["status","ui settings","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Elasticsearch plugin is red","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-13T21:36:33Z","tags":["status","plugin:ml@5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Request Timeout after 3000ms","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2017-10-13T21:37:02Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nPOST http://elasticsearch:9200/.reporting-*/esqueue/_search?version=true => getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}
{"type":"log","@timestamp":"2017-10-13T21:37:32Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","@timestamp":"2017-10-13T21:37:33Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","@timestamp":"2017-10-13T21:37:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","@timestamp":"2017-10-13T21:37:38Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","@timestamp":"2017-10-13T21:37:42Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
The issue here is that you exposed Elasticsearch on port 9200 but are trying to connect to port 9300 in your kibana.yml file.
You either need to edit your kibana.yml file to use:
elasticsearch.url: http://elasticsearch:9200
Or change the port in the elasticsearch service to 9300.
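The getaddrinfo ENOTFOUND elasticsearch errors in the edited logs additionally point at service DNS: if Kibana runs in a different namespace than the elasticsearch Service, the short name will not resolve. A quick way to check from inside the cluster, using a throwaway busybox pod (service names taken from the manifests above):

```shell
# Does the service exist, and in which namespace?
kubectl get svc --all-namespaces | grep elasticsearch

# Can the short name be resolved from a pod in the default namespace?
kubectl run dns-test --rm -it --restart=Never --image=busybox -- \
  nslookup elasticsearch

# If Kibana runs in another namespace, use the fully qualified name in
# kibana.yml instead, e.g. http://elasticsearch.default.svc.cluster.local:9200
```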
