Where do I put indices.* settings in an Elasticsearch cluster?

I am new to handling an Elasticsearch cluster.
My question is where to put indices.* settings:
Data nodes
Both master and data nodes
All the nodes
For example:
indices.memory.index_buffer_size: 20%
On which nodes do I need to add this setting?
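To be concrete, I mean a static setting placed in a node's elasticsearch.yml, something like:
# elasticsearch.yml
indices.memory.index_buffer_size: 20%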

Related

Can you run an elasticsearch data node after deleting the data folder?

I am running a three-node Elasticsearch (ELK) cluster. All nodes have the same roles, e.g. data, master, etc. The disk on node 3 that holds the data folder became corrupt, and that data is probably unrecoverable. The other nodes are running normally and one of them assumed the master role instead.
Will the cluster work normally if I replace the disk and make the empty directory available to elastic again, or am I risking crashing the whole cluster?
EDIT: As this is not explicitly mentioned in the answer, yes, if you add your node with an empty data folder, the cluster will continue normally as if you added a new node to the cluster, but you have to deal with the missing data. In my case, I lost the data as I do not have replicas.
Let me try to explain it in a simple way.
Your data got corrupted on node-3, so if you add that node again it will not have the older data, i.e. the shards stored on node-3 will remain unavailable to the cluster.
Did you have replica shards configured for the indexes?
What is the current status (yellow/red) of the cluster with node-3 removed?
If a primary shard isn't available, the master node promotes one of the active replicas to become the new primary. If there are currently no active replicas, the status of the cluster will remain red.
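If you're unsure, you can check the cluster status and see which shards are unassigned (and why) with the standard cluster APIs, for example (the column list is optional):
GET _cluster/health
GET _cat/shards?v&h=index,shard,prirep,state,unassigned.reason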

Which node of an Elasticsearch cluster (master, data, ingest) gathers data from Logstash?

I have 3 Elasticsearch nodes in my cluster. How do they connect to each other, and how do I set the output filter of my Logstash to send data to the ES cluster (i.e. which node is responsible for gathering the data)?
Logstash actually sends data to the cluster, and you can check its configuration under /etc/logstash/conf.d/*. Ingest nodes are responsible for pre-processing documents before they are indexed on the cluster. By default all nodes are ingest nodes. You can have a dedicated ingest node, but with 3 nodes you don't need one.
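As a rough sketch, a Logstash elasticsearch output can simply list all the nodes and the plugin will spread requests across them (the host names below are placeholders):
output {
  elasticsearch {
    hosts => ["es-node-1:9200", "es-node-2:9200", "es-node-3:9200"]
  }
}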

Elasticsearch - Adding node without replication?

I have a master/data Elasticsearch node. It has now reached 90% capacity and I need to provision additional space to continue adding more data.
I have created a new server with 700gb disk space, installed ES & Kibana, and now wish for this second server to provide additional space to / work with the master node.
My problem:
As it says on the ES website:
When you add more nodes to a cluster, it automatically allocates
replica shards.
My issue is that I do not wish to replicate the data from the master node, but instead just provide additional space using this second server which can then be queried by the master node.
My question:
What is the best way to achieve this? Is adding a node the incorrect thing to do here?
Using index-level shard allocation filtering, you can constrain a given index (or set of indexes) to stay on a given node (or set of nodes).
Simply run this:
PUT orders,orders_1,orders_2,orders_3,orders_4,orders_5/_settings
{
  "index.routing.allocation.require._name": "your-first-node-name"
}
Note that you can also use ._ip or ._host instead of ._name if you prefer.
Then you can add a new node and let it join the cluster; nothing will rebalance, and all your current shards will stay on your current node.
And if you need to create a new index on the second node and want to make sure that it will stay on that node you can specify the same settings at index creation time:
PUT new_orders
{
  "settings": {
    "index.routing.allocation.require._name": "your-second-node-name"
  }
}
The index called new_orders will be created on the second node and stay there.
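If you want to double-check the allocation afterwards, you can list which node each shard ended up on (the index names here just mirror the example above):
GET _cat/shards/orders*,new_orders?v&h=index,shard,prirep,state,node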

Adding a cluster to an existing Elasticsearch in ELK

Currently I have an existing
1. Elasticsearch
2. Logstash
3. Kibana
I have existing data on them.
Now I have set up an ELK cluster with 3 master nodes, 5 data nodes and 3 client nodes.
But I am not sure how I can get the existing data into it.
Is it possible to make the existing ES node a data node and attach it to the cluster? Will that data then get replicated to the other data nodes as well, so that I can take the old node offline afterwards?
Option 1
How about just trying with fewer nodes first? It is not hard to test whether this is supported: set up one node, feed in some data, then add one more node and configure them as a cluster to see if the data gets synchronized.
Option 2
Another option is to use an Elasticsearch migration tool like https://github.com/taskrabbit/elasticsearch-dump; basically, you set up a clean cluster and migrate all the data from your old node to this cluster.
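As a rough sketch of how elasticsearch-dump is typically run (the URLs and index name below are placeholders), you copy the mapping first and then the documents:
elasticdump --input=http://old-node:9200/my_index --output=http://new-cluster:9200/my_index --type=mapping
elasticdump --input=http://old-node:9200/my_index --output=http://new-cluster:9200/my_index --type=data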

Change Index Template Across Elasticsearch Cluster

I'm running an ES cluster with multiple nodes on AWS. The cluster is behind an ELB that is fed by Logstash. I've tested a new mapping template in a sandbox cluster, but with only one node.
I've read over these two articles:
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html
https://www.elastic.co/blog/changing-mapping-with-zero-downtime
When changing the template, does each node need to have an updated copy? Or can the new template be fed to the master, which will then share the template?
You can add/update your template using the _template endpoint and all master-eligible nodes will record it.
However, if you decide to store that template within a file (in the config/templates folder), then you have to store that file on all master-eligible nodes.
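As a minimal sketch of the API-based approach (the template name and index pattern are placeholders; older ES versions use "template" instead of "index_patterns"):
PUT _template/logstash_template
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "number_of_shards": 1
  }
}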
