After installing Elasticsearch 7.12.1 with this config:
network.host: 127.0.0.1
http.port: 9200
discovery.seed_hosts: ["127.0.0.1", "[::1]"]
the main page of Elasticsearch shows this JSON:
{
"name" : "master",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "_na_",
"version" : {
"number" : "7.12.1",
"build_flavor" : "default",
"build_type" : "zip",
"build_hash" : "3186837139b9c6b6d23c3200870651f10d3343b7",
"build_date" : "2021-04-20T20:56:39.040728659Z",
"build_snapshot" : false,
"lucene_version" : "8.8.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
and in the _xpack output the license is null.
You should add
node.name: master
cluster.initial_master_nodes: ["master"]
to your elasticsearch.yml file and restart the Elasticsearch service.
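Putting it together, a minimal single-node elasticsearch.yml for this setup might look like the sketch below (the node name "master" is taken from the JSON above; adjust it to your own setup):
# sketch of a minimal single-node elasticsearch.yml (7.x)
network.host: 127.0.0.1
http.port: 9200
node.name: master
discovery.seed_hosts: ["127.0.0.1", "[::1]"]
cluster.initial_master_nodes: ["master"]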
On another setup, Elasticsearch 8.1.3 is running and the root URL returns:
{
"name" : "DESKTOP-BNTBBBG",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "DlRfjfPlRg69wUBRfy3kKQ",
"version" : {
"number" : "8.1.3",
"build_flavor" : "default",
"build_type" : "zip",
"build_hash" : "39afaa3c0fe7db4869a161985e240bd7182d7a07",
"build_date" : "2022-04-19T08:13:25.444693396Z",
"build_snapshot" : false,
"lucene_version" : "9.0.0",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
}
But when running this command:
php bin/magento setup:install --base-url="http://versionup.magento.com" --db-host="localhost" --db-name="magento2" --db-user="magento2" --db-password="magento2" --admin-firstname="admin" --admin-lastname="admin" --admin-email="user@example.com" --admin-user="admin" --admin-password="Admin#123456" --language="en_US" --currency="USD" --timezone="America/Chicago" --use-rewrites="1" --backend-frontname="admin" --search-engine=elasticsearch7 --elasticsearch-host="localhost" --elasticsearch-port=9200
it fails with:
Could not validate a connection to Elasticsearch. Unknown 401 error from Elasticsearch null
I had the same problem, but it's fixed now.
Make sure xpack.security.http.ssl is enabled in elasticsearch.yml (I had disabled it before).
Then change the host to:
--elasticsearch-host="https://localhost"
Then add these options to your command for authentication:
--elasticsearch-enable-auth=true --elasticsearch-username="elastic" --elasticsearch-password="your password"
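For reference, the security block this relies on in an Elasticsearch 8.x elasticsearch.yml typically looks like the sketch below when the auto-configured defaults are kept (the keystore path shown is the generated default and may differ on your install):
# sketch of the relevant elasticsearch.yml security settings (Elasticsearch 8.x defaults)
xpack.security.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12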
I changed the JVM heap size, and after restarting Elasticsearch, Logstash can't connect to it anymore.
logstash.yml
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["es:9200"]
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: pass
Logstash logs:
[WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://es:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://es:9200/'"}
curl from the Logstash host to Elasticsearch with the same credentials succeeds:
logstash: curl -ulogstash_system es:9200
Enter host password for user 'logstash_system':
{
"name" : "elasticsearch",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "",
"version" : {
"number" : "7.5.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "",
"build_date" : "2019-11-26T01:06:52.518245Z",
"build_snapshot" : false,
"lucene_version" : "8.3.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
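Worth noting: the 401 in that warning is reported by logstash.outputs.elasticsearch, i.e. the pipeline output, not the monitoring client configured in logstash.yml. With security enabled, the output block in the pipeline config needs its own credentials as well; a hedged sketch (the user, password, and index pattern below are placeholders, not values from the question):
output {
  elasticsearch {
    hosts    => ["http://es:9200"]
    user     => "logstash_internal"        # placeholder: a user with write privileges
    password => "changeme"                 # placeholder password
    index    => "logstash-%{+YYYY.MM.dd}"
  }
}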
Just installed Elasticsearch on my Mac (brew install elasticsearch).
Then started it with: brew services start elasticsearch.
I can see it is running:
curl localhost:9200
{
"name" : "84ucGje",
"cluster_name" : "elasticsearch_dnk306",
"cluster_uuid" : "yU6wdZpJT_-1OvaFf7l7Ug",
"version" : {
"number" : "6.5.0",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "816e6f6",
"build_date" : "2018-11-09T18:58:36.352602Z",
"build_snapshot" : false,
"lucene_version" : "7.5.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
When I try to add a document to it, it fails with a 406 error:
curl -XPOST localhost:9200/test/1 -d '{"title":"Hello world"}'
{"error":"Content-Type header [application/x-www-form-urlencoded] is not supported","status":406}
What do I need to do to make this work?
Thanks.
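The error message itself points at the cause: starting with version 6.0, Elasticsearch requires an explicit Content-Type header on requests that have a body. A minimal sketch of the same request with the header added (index name and document body kept from the question):
curl -XPOST localhost:9200/test/1 -H 'Content-Type: application/json' -d '{"title":"Hello world"}'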
I installed Oracle JDK 8 and Elasticsearch on an EC2 instance and created an AMI from it. On the second copy of the EC2 machine I only changed the node name in elasticsearch.yml.
Both nodes run fine when started individually (note that the node ID appears to be the same on both). But when they run simultaneously, the one started later fails with the following in the logs:
[2018-08-07T16:35:06,260][INFO ][o.e.d.z.ZenDiscovery ] [node-1]
failed to send join request to master
[{node-2}{uQHBhDuxTeWOgmZHsuaZmA}{akWOcJ47SZKpR_EpA2lpyg}{10.127.114.212}{10.127.114.212:9300}{aws_availability_zone=us-east-1c, ml.machine_memory=66718932992, ml.max_open_jobs=20,
xpack.installed=true, ml.enabled=true}], reason
[RemoteTransportException[[node-2][10.127.114.212:9300][internal:discovery/zen/join]];
nested: IllegalArgumentException[can't add node
{node-1}{uQHBhDuxTeWOgmZHsuaZmA}{Ba1r1GoMSZOMeIWVKtPD2Q}{10.127.114.194}{10.127.114.194:9300}{aws_availability_zone=us-east-1c, ml.machine_memory=66716696576, ml.max_open_jobs=20,
xpack.installed=true, ml.enabled=true}, found existing node
{node-2}{uQHBhDuxTeWOgmZHsuaZmA}{akWOcJ47SZKpR_EpA2lpyg}{10.127.114.212}{10.127.114.212:9300}{aws_availability_zone=us-east-1c, ml.machine_memory=66718932992, xpack.installed=true,
ml.max_open_jobs=20, ml.enabled=true} with the same id but is a
different node instance];
My elasticsearch.yml:
cluster.name: elk
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
network.publish_host: _ec2:privateIp_
transport.publish_host: _ec2:privateIp_
discovery.zen.hosts_provider: ec2
discovery.ec2.tag.ElasticSearch: elk-tag
cloud.node.auto_attributes: true
cluster.routing.allocation.awareness.attributes: aws_availability_zone
Output from _nodes endpoint:
//----Output when node-1 is run individually/at first----
{
"_nodes" : {
"total" : 1,
"successful" : 1,
"failed" : 0
},
"cluster_name" : "elk",
"nodes" : {
"uQHBhDuxTeWOgmZHsuaZmA" : {
"name" : "node-1",
"transport_address" : "10.127.114.194:9300",
"host" : "10.127.114.194",
"ip" : "10.127.114.194",
"version" : "6.3.2",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "053779d",
"roles" : [
"master",
"data",
"ingest"
],
"attributes" : {
"aws_availability_zone" : "us-east-1c",
"ml.machine_memory" : "66716696576",
"xpack.installed" : "true",
"ml.max_open_jobs" : "20",
"ml.enabled" : "true"
},
"process" : {
"refresh_interval_in_millis" : 1000,
"id" : 3110,
"mlockall" : true
}
}
}
}
//----Output when node-2 is run individually/at first----
{
"_nodes" : {
"total" : 1,
"successful" : 1,
"failed" : 0
},
"cluster_name" : "elk",
"nodes" : {
"uQHBhDuxTeWOgmZHsuaZmA" : {
"name" : "node-2",
"transport_address" : "10.127.114.212:9300",
"host" : "10.127.114.212",
"ip" : "10.127.114.212",
"version" : "6.3.2",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "053779d",
"roles" : [
"master",
"data",
"ingest"
],
"attributes" : {
"aws_availability_zone" : "us-east-1c",
"ml.machine_memory" : "66718932992",
"xpack.installed" : "true",
"ml.max_open_jobs" : "20",
"ml.enabled" : "true"
},
"process" : {
"refresh_interval_in_millis" : 1000,
"id" : 4869,
"mlockall" : true
}
}
}
}
Solved this by deleting the copied node data (rm -rf /var/lib/elasticsearch/nodes/) on every instance and restarting Elasticsearch.
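A sketch of that fix on a systemd-managed RPM install (the service name and data path are the distribution defaults and may differ on your machine; this wipes the node's local index data, so only use it on nodes you can afford to reset):
sudo systemctl stop elasticsearch
sudo rm -rf /var/lib/elasticsearch/nodes/
sudo systemctl start elasticsearch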
I installed Elasticsearch as an RPM service.
I have 5 cluster nodes: one is a tribe node, three are master+data nodes, and the last one is a data-only node. I pointed Logstash at that data node and indexing succeeded, but when I checked the other boxes, the indexed data is not shared across the cluster.
When I check the URL xxx.xx.xx:9200/, I see a new field called cluster_uuid which I didn't see in my previous version of ELK (2.3.1). I suspect the problem is due to the nodes having different cluster_uuid values.
node 1:
{
"name" : "node2",
"cluster_name" : "cluster_sft",
"cluster_uuid" : "wwbCT_wJRryT6ko-g_lm7A",
"version" : {
"number" : "5.2.1",
"build_hash" : "db0d481",
"build_date" : "2017-02-09T22:05:32.386Z",
"build_snapshot" : false,
"lucene_version" : "6.4.1"
},
"tagline" : "You Know, for Search"
}
node 2:
{
"name" : "node3",
"cluster_name" : "cluster_sft",
"cluster_uuid" : "qq3YUgZGR9O7nBXTLe8SUg",
"version" : {
"number" : "5.2.1",
"build_hash" : "db0d481",
"build_date" : "2017-02-09T22:05:32.386Z",
"build_snapshot" : false,
"lucene_version" : "6.4.1"
},
"tagline" : "You Know, for Search"
}
Here is my Elasticsearch config:
cluster.name: cluster_sft
node.name: node1
node.master: true
node.data: true
bootstrap.memory_lock: true
http.host: xx.xx.xxx.xxx
network.host: 127.0.0.1
http.port: 9200
discovery.zen.ping.unicast.hosts: ["xx.xx.xxx.xxx:9200","xx.xx.xxx.xxx;9200","xx.xx.xxx.xxx:9200","xx.xx.xxx.xxx:9200"]
discovery.zen.minimum_master_nodes: 1
I am not sure whether I am missing any attributes or going wrong somewhere in the configuration.
Thanks,
Rajeshkumar
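One detail worth checking: zen discovery talks to the transport port (9300 by default), not the HTTP port 9200 used in discovery.zen.ping.unicast.hosts above (one entry also has a ';' where a ':' should be). A hedged sketch of what that section would look like with transport ports (the addresses are the same placeholders as in the question; with three master-eligible nodes, discovery.zen.minimum_master_nodes would normally be 2):
discovery.zen.ping.unicast.hosts: ["xx.xx.xxx.xxx:9300","xx.xx.xxx.xxx:9300","xx.xx.xxx.xxx:9300","xx.xx.xxx.xxx:9300"]
discovery.zen.minimum_master_nodes: 2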