First I created a single-node ELK setup, using this config in my elasticsearch.yml:
# sed '/^#/d' /etc/elasticsearch/elasticsearch.yml
node.name: "elk01"
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
xpack.security.enabled: true
discovery.type: single-node
Then I used this command to auto-generate passwords for the built-in users:
sudo /usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
and it was OK, everything worked. But I want an ELK cluster, so I created a new server and changed the configs:
elk01
# sed '/^#/d' /etc/elasticsearch/elasticsearch.yml
cluster.name: "elk-testcluster"
node.name: "elk01"
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["10.60.201.31", "10.60.201.32"]
cluster.initial_master_nodes: ["10.60.201.31"]
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
elk02
# sed '/^#/d' /etc/elasticsearch/elasticsearch.yml
cluster.name: "elk-testcluster"
node.name: "elk02"
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["10.60.201.31", "10.60.201.32"]
cluster.initial_master_nodes: ["10.60.201.31"]
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
Right now, when I use curl with username/password, I can reach elk01 but not elk02:
# curl -XGET "10.60.201.31:9200" -u elastic:passcreatedonelk01
{
"name" : "elk01",
"cluster_name" : "elk-testcluster",
"cluster_uuid" : "7513Zor7S3SHqVFzs0hEMQ",
"version" : {
"number" : "7.17.4",
"build_flavor" : "default",
"build_type" : "deb",
"build_hash" : "79878662c54c886ae89206c685d9f1051a9d6411",
"build_date" : "2022-05-18T18:04:20.964345128Z",
"build_snapshot" : false,
"lucene_version" : "8.11.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
# curl -XGET "10.60.201.32:9200" -u elastic:passcreatedonelk01
{"error":{"root_cause":[{"type":"security_exception","reason":"unable to authenticate user [elastic] for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"unable to authenticate user [elastic] for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}
When I run elasticsearch-setup-passwords again on elk02, I get an error:
sudo /usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
Failed to determine the health of the cluster running at http://10.60.201.32:9200
Unexpected response code [503] from calling GET http://10.60.201.32:9200/_cluster/health?pretty
Cause: master_not_discovered_exception
It is recommended that you resolve the issues with your cluster before running elasticsearch-setup-passwords.
It is very likely that the password changes will fail when run against an unhealthy cluster.
Do you want to continue with the password setup process [y/N]y
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y
Unexpected response code [503] from calling PUT http://10.60.201.32:9200/_security/user/apm_system/_password?pretty
Cause: Cluster state has not been recovered yet, cannot write to the [null] index
Possible next steps:
* Try running this tool again.
* Try running with the --verbose parameter for additional messages.
* Check the elasticsearch logs for additional error details.
* Use the change password API manually.
ERROR: Failed to set password for user [apm_system].
When forming a cluster, is sharing a common password across nodes not supported? Or is it because I ran elasticsearch-setup-passwords before forming the cluster?
Once you enable SSL, you need to add a certificate and key on each node for the transport layer.
You can follow these instructions:
https://www.elastic.co/guide/en/elasticsearch/reference/current/security-basic-setup.html
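For reference, a minimal sketch of that guide's flow (the .p12 file names below are the tool's defaults; adjust paths to your layout): generate a CA and a shared certificate on one node, copy it to every node's config directory, and point the transport SSL settings at it.
# On elk01: create a CA, then a certificate signed by it
# (default outputs: elastic-stack-ca.p12, elastic-certificates.p12)
sudo /usr/share/elasticsearch/bin/elasticsearch-certutil ca
sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
# Copy elastic-certificates.p12 into /etc/elasticsearch/ on both elk01 and elk02, then add on each node:
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
Once the transport layer works and elk02 actually joins the cluster, the passwords created on elk01 should work against both nodes, because the built-in users live in the cluster's .security index rather than per node.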
I referred to this and this.
Why do we need two fields to tell Kibana where the monitoring data is?
elasticsearch.hosts
monitoring.ui.elasticsearch.hosts
But when I point either of these properties at my monitoring cluster, it works. Did I wrongly assume that elasticsearch.hosts is for my actual production cluster rather than the monitoring cluster?
Apart from the "why" part, is my understanding of these integration attributes correct?
Any thoughts? Thanks.
kibana.yml:
server.host: "ip.ad.re.ss"
#elasticsearch.hosts: ["http://host1:9200","http://host2:9200","FewMoreHosts"]
monitoring.ui.elasticsearch.hosts: ["http://MonitoringNode:9200"]
I haven't changed any part in elasticsearch.yml of monitoring node.
metricbeat.yml:
output.elasticsearch:
  hosts: ["http://MonitoringNode:9200"]
setup.kibana:
  host: kibanaHost
In modules.d/elasticsearch-xpack.yml, I left the default configurations.
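For reference, the enabled module config in modules.d/elasticsearch-xpack.yml looks roughly like this by default (the host and period may differ by version):
- module: elasticsearch
  xpack.enabled: true
  period: 10s
  # Point this at the cluster you want to monitor, not the monitoring cluster
  hosts: ["http://localhost:9200"]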
elasticsearch.yml:
cluster.name: es_cluster
node.name: master-1
node.data: false
node.master: true
node.ingest: true
node.max_local_storage_nodes: 3
transport.tcp.port: 9300
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["master1.ip", "master2.ip","master3.ip"]
cluster.initial_master_nodes: ["master-1","master-2"]
Monitoring cluster yml:
network.host: 0.0.0.0
discovery.type: single-node
When I enable both properties in kibana.yml, I get the below error in the log:
{
"type": "log",
"#timestamp": "2021-04-21T14:48:34-04:00",
"tags": [
"error",
"plugins",
"data",
"data",
"indexPatterns"
],
"pid": 29959,
"message": "Error: No indices match pattern \"metricbeat-*\"\n at createNoMatchingIndicesError (/usr/share/kibana/src/plugins/data/server/index_patterns/fetcher/lib/errors.js:45:29)\n at convertEsError (/usr/share/kibana/src/plugins/data/server/index_patterns/fetcher/lib/errors.js:71:12)\n at callFieldCapsApi (/usr/share/kibana/src/plugins/data/server/index_patterns/fetcher/lib/es_api.js:69:38)\n at runMicrotasks (<anonymous>)\n at processTicksAndRejections (internal/process/task_queues.js:93:5)\n at getFieldCapabilities (/usr/share/kibana/src/plugins/data/server/index_patterns/fetcher/lib/field_capabilities/field_capabilities.js:35:23)\n at IndexPatternsFetcher.getFieldsForWildcard (/usr/share/kibana/src/plugins/data/server/index_patterns/fetcher/index_patterns_fetcher.js:49:31)\n at IndexPatternsApiServer.getFieldsForWildcard (/usr/share/kibana/src/plugins/data/server/index_patterns/index_patterns_api_client.js:27:12)\n at IndexPatternsService.refreshFieldSpecMap (/usr/share/kibana/src/plugins/data/common/index_patterns/index_patterns/index_patterns.js:216:27)\n at IndexPatternsService.getSavedObjectAndInit (/usr/share/kibana/src/plugins/data/common/index_patterns/index_patterns/index_patterns.js:320:23) {\n data: null,\n isBoom: true,\n isServer: false,\n output: {\n statusCode: 404,\n payload: {\n statusCode: 404,\n error: 'Not Found',\n message: 'No indices match pattern \"metricbeat-*\"',\n code: 'no_matching_indices'\n },\n headers: {}\n }\n}"
}
But if I set only monitoring.ui.elasticsearch.hosts, Kibana shows the data.
elasticsearch.hosts is where you set the hosts that store the data you want to query; this should be your production cluster.
monitoring.ui.elasticsearch.hosts is where you set the hosts of your monitoring cluster, if you have a separate one.
Depending on the size of your cluster, it is recommended to have a separate cluster just for monitoring; this could be a single-node cluster using the basic license, for example.
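Putting the two settings together, a kibana.yml sketch for a setup like the one in the question (host names are the question's own placeholders) could look like this:
# Production cluster: the data you actually query in Kibana
elasticsearch.hosts: ["http://host1:9200", "http://host2:9200"]
# Separate monitoring cluster: where Metricbeat ships the stack monitoring data
monitoring.ui.elasticsearch.hosts: ["http://MonitoringNode:9200"]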
The security tokens that are used in these contexts are cluster-specific, therefore you cannot use a single Kibana instance to connect to both production and monitoring clusters.
As mentioned in the documentation, I think we need a separate Kibana instance to collect data from both clusters. I am not sure about the security tokens, though.
I'm currently running Elasticsearch (ES) 5.5 inside a Docker container (see below):
curl -XGET 'localhost:9200'
{
"name" : "THbbezM",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "CtYdgNUzQrS5YRTRT7xNJw",
"version" : {
"number" : "5.5.0",
"build_hash" : "260387d",
"build_date" : "2017-06-30T23:16:05.735Z",
"build_snapshot" : false,
"lucene_version" : "6.6.0"
},
"tagline" : "You Know, for Search"
}
I've changed the elasticsearch.yml file to look like this:
http.host: 0.0.0.0
# Uncomment the following lines for a production cluster deployment
#transport.host: 0.0.0.0
#discovery.zen.minimum_master_nodes: 1
network.host: 0.0.0.0
http.port: 9200
I can currently get my indexes through curl -XGET commands. The thing is, I want to be able to make HTTP requests to this ES instance from my machine (Mac OS X) using its IP address instead of 'localhost:9200'.
So, here's what I've tried already:
1) I tried it in Postman and got the following response:
Could not get any response
There was an error connecting to X.X.X.X:9200/.
Why this might have happened:
The server couldn't send a response:
Ensure that the backend is working properly
Self-signed SSL certificates are being blocked:
Fix this by turning off 'SSL certificate verification' in Settings > General
Client certificates are required for this server:
Fix this by adding client certificates in Settings > Certificates
Request timeout:
Change request timeout in Settings > General
2) I also tried Sense (a Chrome plugin):
Request failed to get to the server (status code: 0):
3) Running curl from my machine's terminal doesn't work either.
What am I missing here?
Docker for Mac provides a DNS name you can use:
docker.for.mac.localhost
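This name resolves to the Mac host from inside a container, so if the ES container publishes its port on the host, other containers can reach it through that name (a sketch, assuming the default -p 9200:9200 mapping):
# Run from inside any container on the same Docker for Mac host:
curl -XGET 'http://docker.for.mac.localhost:9200'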
You should use the value specified under container_name in the YML file to connect to your cluster. Example:
services:
  elasticsearch:
    container_name: 'example_elasticsearch'
    image: 'docker.elastic.co/elasticsearch/elasticsearch:6.6.1'
In this case, Elasticsearch is located at http://example_elasticsearch:9200. Note that example_elasticsearch is the name of the container and may be used the same way as a machine name or hostname.
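As a usage sketch, another container attached to the same compose network can reach it by that name (the network name myproj_default here is hypothetical; compose derives it from your project directory):
# One-off check from a throwaway container on the compose network:
docker run --rm --network myproj_default curlimages/curl -s http://example_elasticsearch:9200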
I am trying to set up Elasticsearch on a single host. Here is what my configuration looks like:
elasticsearch.yml
node.name: ${HOSTNAME}
network.host: _site_, _local_
http.port: 9200
transport.tcp.port: 9300
cluster.name: "test_cluster"
node.local: true
kibana.yml
server.host: 0.0.0.0
elasticsearch.url: http://localhost:9200
On following command:
curl -XGET 'localhost:9200/_cluster/health?pretty'
I get following message:
{
"error" : {
"root_cause" : [
{
"type" : "master_not_discovered_exception",
"reason" : null
}
],
"type" : "master_not_discovered_exception",
"reason" : null
},
"status" : 503
}
In log file I see following message:
not enough master nodes discovered during pinging (found [[]], but needed [-1]), pinging again
Could someone please point me in the right direction here?
I spent days (sigh) on basically this. I was trying to upgrade my single-node cluster from ES 6.x to 7.x, and I kept dying on the hill of master_not_discovered_exception.
What finally solved it for me was examining a completely fresh install of 7.x.
For my single-node cluster, my /etc/elasticsearch/elasticsearch.yml needed the line:
discovery.type: single-node
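To confirm the fix, the health check from the question should stop returning master_not_discovered_exception after a restart:
curl -XGET 'localhost:9200/_cluster/health?pretty'
# expect "status" : "green" (or "yellow" if there are unassigned replicas) instead of a 503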
I hope that saves someone else from spending days like I did. In my defence, I'm very new to ES.
The following changes in elasticsearch.yml worked for me:
# Node
node.name: node-1
# Paths
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
# Network
network.host: 0.0.0.0
http.port: 9200
# Discovery
discovery.seed_hosts: ["127.0.0.1:9300"]
cluster.initial_master_nodes: ["node-1"]
Steps:
sudo systemctl stop elasticsearch.service
sudo nano /etc/elasticsearch/elasticsearch.yml
Apply the above changes, then save and close the editor.
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service
sudo systemctl start elasticsearch.service
NOTE: This was Elasticsearch 7 on Ubuntu 20.04; it should work with other versions as well.
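Once the service is up, a quick sanity check against the settings above (port 9200 on localhost):
curl -XGET 'http://localhost:9200/_cat/nodes?v'
# node-1 should be listed, with * in the master column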
According to this link, when the cluster starts for the very first time you should set the initial_master_nodes config like this in /etc/elasticsearch/elasticsearch.yml:
node.name: "salehnode"
cluster.initial_master_nodes:
- salehnode
On my side, after installing ES 7+, I had to set node-1 as a master-eligible node in the new cluster.
sudo nano /etc/elasticsearch/elasticsearch.yml
cluster.initial_master_nodes: ["node-1"]
First, I don't think you need the network.host setting for this.
In your log it is pinging for a master but finding none.
Can you try setting these properties:
node.master: true
node.data: true
Also, can you please post more logs here if it doesn't work?
You can try this in your elasticsearch.yml:
discovery.zen.minimum_master_nodes: 1
I think ES will keep pinging the hosts in discovery.zen.ping.unicast.hosts until it finds a number of master-eligible nodes equal to minimum_master_nodes.
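For reference, a minimal sketch of those legacy (pre-7.x) Zen discovery settings on a single host; the address is illustrative:
# elasticsearch.yml, ES 6.x and earlier only
discovery.zen.ping.unicast.hosts: ["127.0.0.1"]
# quorum: (master-eligible nodes / 2) + 1, which is 1 for a single node
discovery.zen.minimum_master_nodes: 1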
I installed Elasticsearch on AWS Ubuntu 14.04, following the install document.
I changed some settings in elasticsearch.yml:
elasticsearch.yml
#network.bind_host: 192.168.0.1
to
network.bind_host: localhost
The document told me localhost is better for security.
When I start Elasticsearch:
sudo service elasticsearch restart
* Stopping Elasticsearch Server [ OK ]
* Starting Elasticsearch Server
and when I send a curl request:
sudo curl -X GET 'http://localhost:9200'
curl: (7) Failed to connect to localhost port 9200: Connection refused
So I changed network.bind_host:
network.bind_host: 0.0.0.0
and
sudo curl -X GET 'http://localhost:9200'
{
"status" : 200,
"name" : "****",
"cluster_name" : "elasticsearch_***",
"version" : {
"number" : "1.7.2",
"build_hash" : "e12r1r33r3fs593f7202acbd816e8ec",
"build_timestamp" : "2015-09-14T09:49:53Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
}
But I think 0.0.0.0 is dangerous once my website goes to production.
Can somebody please help me?
My guess is that you should have network.host set to localhost or 127.0.0.1 in your elasticsearch.yml:
network.host: localhost <-- try 127.0.0.1 as well
Normally, network.bind_host defaults to the value of network.host. You could have a look at this for more details.
And just in case, try adding http.port as well, so that you can be sure which port ES is listening on:
http.port: 9200
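To double-check which address Elasticsearch actually bound to after a restart, something like this helps (assuming iproute2 is installed):
# Look for 127.0.0.1:9200 rather than 0.0.0.0:9200 in the output
sudo ss -tlnp | grep 9200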
Hope it helps!
I am building a cluster with Elasticsearch. I downloaded Elasticsearch as a zip file and unzipped it under /opt. These are the two IPs I am using for the trial: 172.16.30.51 and 172.16.30.52.
I have come across some problems. I amended the hosts file to add the server IPs:
sudo vi /etc/hosts
172.16.30.51 elasticnode01
172.16.30.52 elasticnode02
Also, on server elasticnode01:
cd /opt/elasticsearch
vi config/elasticsearch.yml
I amended the following settings:
cluster.name: mycluster
node.name: "elasticnode01"
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["elasticnode02"]
On server elasticnode02:
cd /opt/elasticsearch
vi config/elasticsearch.yml
I amended the following settings:
cluster.name: mycluster
node.name: "elasticnode02"
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["elasticnode01"]
Then finally I ran:
bin/elasticsearch &
It seems fine, but as soon as I run:
curl 'localhost:9200/_cat/nodes?v'
It returns
host ip heap.percent ram.percent load node.role master name
127.0.0.1 127.0.0.1 4 39 0.20 d * elasticnode01
Would anyone mind telling me what the problem is? Thanks.
Since ES 2.0, the ES server binds to localhost by default, so they won't be able to discover each other.
You need to configure network.host on both servers, like this:
On elasticnode01:
network.host: elasticnode01
On elasticnode02:
network.host: elasticnode02
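After restarting both nodes, re-running the check from the question against either node should list both of them:
curl 'elasticnode01:9200/_cat/nodes?v'
# both elasticnode01 and elasticnode02 should appear, with * marking the elected master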