I want to set up a sort of aggregation layer over multiple Elasticsearch clusters, based on the Cross Cluster Search feature.
I have the following layout: multiple isolated clusters, each reachable from outside only through a single address. As the seed for Cross Cluster Search I am using that only network-reachable cluster address.
After querying I get the following error:
[elasticsearch][172.16.10.100:9300] connect_timeout[30s]
I can't change publish_host for the nodes, because that address is used inside the cluster for node-to-node communication.
Is there any option to force Cross Cluster Search to use only the provided address?
Or is there some other way to set up a kind of proxy so that a user can search and visualize data from multiple isolated Elasticsearch clusters in Kibana?
I believe the only solution is to upgrade to Elasticsearch 7, which provides the cluster.remote.${cluster_alias}.proxy option, where you can specify the address the cross cluster search connection should use.
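For illustration, a minimal sketch of the relevant settings in the local cluster's elasticsearch.yml, assuming a remote cluster aliased cluster_one (the alias is made up; the address is the one from the error above):

cluster.remote.cluster_one.seeds: ["172.16.10.100:9300"]
cluster.remote.cluster_one.proxy: "172.16.10.100:9300"   # connect only through this reachable address

The idea is that with the proxy set, the local cluster connects through that single address instead of dialing the publish addresses it learns from the remote nodes.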
I have installed Elasticsearch on my Ubuntu system and it's working fine with the default cluster.
But I want to create another cluster.
I have checked the official Elasticsearch documentation, but I haven't found any steps for creating another cluster, or multiple clusters.
You need to update ES_HOME/config/elasticsearch.yml. Under the cluster section, change the cluster.name parameter:
cluster.name: my_cluster
The default value for the cluster name is elasticsearch.
One instance of ES can be part of only one cluster. If all ES instances/machines share the same cluster name, Elasticsearch will form a cluster automatically, as long as the machines are all on the same network.
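As a concrete sketch, a second instance on the same machine just needs its own elasticsearch.yml with a different cluster name and non-clashing ports (the values below are only examples):

cluster.name: my_second_cluster
node.name: node-1
http.port: 9201              # the first instance defaults to 9200
transport.tcp.port: 9301     # the first instance defaults to 9300

Two instances with different cluster.name values will never join each other, even on the same network.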
I'm trying to create an Elasticsearch cluster (2-3 nodes) with version 2.3.5 on Ubuntu 16.04.
Some articles say that if all nodes are on one network, you only need to specify the same cluster name on all nodes, and Elasticsearch itself will discover them and add them to the cluster. I did that, but with no result.
Could you help me: does this feature work on 2.3.5?
Or do I need to specify:
discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
Thanks.
If Elasticsearch instances are on the same network they can discover each other and form a cluster, and you pick the cluster with a shared cluster name as you say. Note, however, that since 2.0 multicast discovery is no longer bundled, so nodes on separate machines will not find each other automatically. The default transport/discovery port is 9300, so ensure that port is reachable between the nodes.
Specifying the node addresses explicitly, as in your example, is the reliable option.
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html
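Putting that together, a minimal sketch of elasticsearch.yml for one node of a three-node 2.3.5 cluster (the IPs come from the question; network.host must be each node's own address):

cluster.name: my_cluster
network.host: 10.0.0.1
discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
discovery.zen.minimum_master_nodes: 2   # majority of 3 master-eligible nodes

Repeat this on each node, changing only network.host.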
I want to run Logstash -> Elasticsearch with high availability and cannot find an easy way to achieve it. Please review how I see it and correct me:
Goal:
5 machines each running elasticsearch united into a single cluster.
5 machines each running logstash server and streaming data into elasticsearch cluster.
N machines under monitoring each running lumberjack and streaming data into logstash servers.
Constraint:
It is supposed to run on a PaaS (CoreOS/Docker), so multicast discovery does not work.
Solution:
Lumberjack lets me specify a list of Logstash servers to forward data to. Lumberjack randomly selects a target server and switches to another one if that server goes down. It works (see the sketch after this list).
I can use the Zookeeper discovery plugin to construct the Elasticsearch cluster. It works.
With multicast, each Logstash server discovers and joins the Elasticsearch cluster. Without multicast, Logstash only lets me specify a single Elasticsearch host. But that is not high availability: I want to output to the cluster, not to a single host that can go down.
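For reference, a sketch of that server list in the Lumberjack (logstash-forwarder) JSON config; the hostnames, port, and paths are placeholders:

{
  "network": {
    "servers": [ "logstash1.example.com:5043", "logstash2.example.com:5043" ]
  },
  "files": [
    { "paths": [ "/var/log/*.log" ] }
  ]
}

The forwarder picks one entry from "servers" at random and fails over to another if the connection drops.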
Question:
Is it realistic to add the Zookeeper discovery plugin to Logstash's embedded Elasticsearch? How?
Is there an easier (natural) solution for this problem?
Thanks!
You could potentially run a separate (non-embedded) Elasticsearch instance in the Logstash container, configured not to store data; you could even make these instances the master nodes:
node.data: false    # hold no data on this node
node.master: true   # but keep it eligible to be elected master
You could then add your Zookeeper plugin to all Elasticsearch instances so they form the cluster.
Logstash then logs over HTTP to the local Elasticsearch instance, which works out which of the 5 data-storing nodes should actually index the data.
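On the Logstash side, a sketch of what that could look like with the 1.x-era elasticsearch_http output pointed at the co-located instance (host and port are placeholders):

output {
  elasticsearch_http {
    host => "127.0.0.1"   # the local non-data Elasticsearch node
    port => 9200
  }
}

Since the local instance is a full cluster member, it routes each document to the right data node.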
Alternatively, this question explains how to get plugins working with the embedded version of Elasticsearch: Logstash output to Elasticsearch on AWS EC2
I would like to know the best setup for running an Elasticsearch data node and the Kibana server on separate machines. I have set up multiple Elasticsearch data nodes and would like to show all dashboards on one server, but I am not sure how to do that. I do not want different URLs for viewing different dashboards. I set up the data nodes with a Logstash shipper on each machine, so all I need now is for Kibana to get data from each of the data nodes. Is that possible?
I edited the Kibana config file as follows:
elasticsearch: "http://192.168.xx.xxx:9200"
So far, Kibana 3 does not support what you need; you can refer to this.
As that page says, you may want to set up a proxy or a redirect page.
Would it be safe to share an Elasticsearch cluster (or a single-node Elasticsearch cluster) between Logstash or Graylog2 and my own application? What configuration changes or additions should be made to accommodate that? What kind of namespacing would the application need to keep its own data separate from Graylog/Logstash?
I'd rather avoid maintaining separate clusters, especially on dev boxes but also in general, if the architecture allows it.
It is technically possible but not recommended: the logging workload will put load on the cluster that you would want to keep decoupled from the other applications using ES.
Graylog2 supports defining an index prefix so that multiple setups can run in one ES cluster.
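For example, in Graylog's server configuration (the prefix value is just an illustration):

elasticsearch_index_prefix = graylog2

Your own application can then write to indices under a different prefix without collisions.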
We have both Kibana and Graylog running against a shared Elasticsearch; the index patterns are just different. We had to add circuit breakers in Elasticsearch so that Kibana's log search queries would not expand beyond a certain size.
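For reference, circuit breakers of the kind meant here can be set in elasticsearch.yml; a sketch with illustrative values, not our production settings:

indices.breaker.total.limit: 70%        # overall cap across all breakers
indices.breaker.request.limit: 40%      # per-request memory, e.g. large aggregations
indices.breaker.fielddata.limit: 40%    # fielddata loaded for sorting/aggregations

A query that would push past these limits is rejected instead of destabilizing the shared cluster.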