I have an ES cluster set up with 3 master and 2 data nodes, and it is running properly. I want to change the data and log location of one of the data nodes from local disk to an external disk.
In my current YAML file:
path.data: /opt/elasticsearch/data
path.logs: /opt/logs/elasticsearch
Now I have added 2 external disks to the server to store data and logs, and I would like to change the locations to the new drives.
What is the correct process to point the ES data/logs to the new disks?
The data on this node can be deleted as this is a dev env.
Could I just stop ES on this server,
delete the contents of the current data and log folders,
mount the new drives to the same mount points, and restart the cluster?
Thanks
You can just change the settings in the YAML file and restart the Elasticsearch service; that should work for you. There is no automatic reload when you change any YAML configuration.
Steps (a sketch follows below):
Change the paths in the YAML file
Restart the service
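For example, assuming the new disks are mounted at /mnt/es-data and /mnt/es-logs (placeholder mount points) and Elasticsearch runs under systemd, the change would look roughly like this. Since the existing data on this node can be thrown away, it is fine for the node to start with empty directories and re-replicate shards from the rest of the cluster:

    # /etc/elasticsearch/elasticsearch.yml on the data node
    path.data: /mnt/es-data
    path.logs: /mnt/es-logs

    # make sure the elasticsearch user owns the new locations, then restart the node
    sudo chown -R elasticsearch:elasticsearch /mnt/es-data /mnt/es-logs
    sudo systemctl restart elasticsearch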
Related
The problem is that Filebeat is sending duplicate logs to Elasticsearch: when I restart Filebeat, it sends the whole log again.
I have been mounting /var/share/filebeat/data into the container where I am running Filebeat. I also changed the permissions of the shared directory so that it is owned by the filebeat user.
I am using Elasticsearch 8.1.2
The most probable reason for this is the persistent volume location of the Filebeat registry. Essentially, Filebeat creates a registry to keep track of all log files processed and the offset reached in each. If this registry is not stored in a persistent location (for instance it is stored in /tmp) and Filebeat is restarted, the registry file is lost and a new one is created. This tells Filebeat to tail all log files present at the specified paths from the beginning, hence the duplicate logs.
To resolve this, mount a persistent volume into the Filebeat container (for example a hostPath) and configure Filebeat to use it for storing the registry.
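As a minimal sketch, assuming Filebeat runs from the official Docker image (whose data directory, including the registry, lives at /usr/share/filebeat/data) and that /var/share/filebeat/data is the host directory you want to keep it on:

    # keep the registry on the host so it survives container restarts
    docker run -d \
      -v /var/share/filebeat/data:/usr/share/filebeat/data \
      -v /var/log/myapp:/var/log/myapp:ro \
      docker.elastic.co/beats/filebeat:8.1.2

The /var/log/myapp path is just a placeholder for whatever logs you are shipping.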
Thanks for the answers, but the issue was that in the initial setup we didn't define an id for the filestream input type. As simple as that.
https://www.elastic.co/guide/en/beats/filebeat/current/_step_1_set_an_identifier_for_each_filestream_input.html
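Per that guide, each filestream input needs a unique id so Filebeat can keep its state for the input across restarts. A minimal filebeat.yml sketch (the id and paths are placeholders):

    filebeat.inputs:
      - type: filestream
        id: myapp-logs            # unique, stable id for this input
        paths:
          - /var/log/myapp/*.log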
I have taken a Cassandra backup from one cluster and am going to restore it to another (new) cluster with the same configuration.
1) I copied the backup to each node in the cluster (each node received the backup from its corresponding source node)
2) Copied the data into the correct location in the data path
But when I log into cqlsh, the data does not display. Restarting the nodes did not work either.
To restore to a clone cluster, you will have to export the tokens from all Cassandra nodes and put them into the new cassandra.yaml on each node. Copy all the SSTables per node after schema creation and start the Cassandra services.
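A rough per-node sketch of those steps (the keyspace, table, and paths are placeholders; the table directory on disk carries a UUID suffix, shown here as <table-id>):

    # on each source node: list this node's tokens
    nodetool info --tokens

    # on the matching target node, in cassandra.yaml:
    initial_token: <comma-separated tokens copied from the source node>
    auto_bootstrap: false

    # after creating the schema on the new cluster, copy the SSTables into place and load them
    cp /backup/my_keyspace/my_table/* /var/lib/cassandra/data/my_keyspace/my_table-<table-id>/
    nodetool refresh my_keyspace my_table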
I am using Graylog 1.4 and Elasticsearch 2.3.
I would like to move the cluster indices, /var/lib/elasticsearch/graylog2/nodes/0/indices/graylog2_0/0/index/, to attached storage (I have SAN storage mounted as /data). Please suggest where to make this change in the configuration, because /var/lib/elasticsearch/graylog2 has consumed almost all of the local disk.
Thanks.
You can change the location of the Elasticsearch indices on disk using the path.data configuration setting: https://www.elastic.co/guide/en/elasticsearch/reference/2.3/setup-configuration.html#paths
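For example, assuming the SAN mount at /data and a package-based install where the config lives at /etc/elasticsearch/elasticsearch.yml, the change could look like this (the /data/elasticsearch subdirectory is just a suggested layout):

    # /etc/elasticsearch/elasticsearch.yml
    path.data: /data/elasticsearch

    # the new directory must be writable by the elasticsearch user; then restart the node
    sudo mkdir -p /data/elasticsearch
    sudo chown -R elasticsearch:elasticsearch /data/elasticsearch
    sudo service elasticsearch restart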
How do I rename the current cluster in the Elasticsearch config?
I want to rename the cluster without it going down, if possible.
Make the edit in the elasticsearch.yml file. By default the ES cluster name is elasticsearch and the cluster.name field in the yml file is commented out, so first uncomment it, then give it a name and restart ES.
If you have a multi-node cluster, you can try updating the cluster name in the config file (and the data directory name, if replicas are enabled) one node at a time, similar to a rolling upgrade of Elasticsearch.
If you are using a single-node cluster, you can change the cluster name in the config file, but a restart of the cluster will be needed for the change to take effect.
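A minimal sketch of the config change on each node (the new name is just an example):

    # /etc/elasticsearch/elasticsearch.yml
    #cluster.name: elasticsearch      <- commented out by default
    cluster.name: my-renamed-cluster

    # restart the node for the change to take effect
    sudo systemctl restart elasticsearch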
I am building my own Docker image for Elasticsearch.
One question I have: will the configuration file elasticsearch.yml be modified by the application on the fly?
I hope that never happens, even when the node is running in a cluster. Some other applications (like Redis) modify their config file on the fly when the cluster status changes. If the configuration file changed on the fly, I would have to export it as a volume, since a Docker image cannot retain changes made at runtime.
No, you don't run any risk of Elasticsearch overwriting your configuration file. The configuration is read from that file and kept in memory. ES also allows settings to be changed persistently at runtime, but those are stored in a separate global cluster state file (in data/CLUSTER_NAME/nodes/N/_state, where N is the 0-based node index) and re-read on each restart.
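For illustration, a setting changed at runtime via the cluster settings API is written to that cluster state on disk, while elasticsearch.yml stays untouched (this assumes a node reachable on localhost:9200):

    curl -X PUT 'http://localhost:9200/_cluster/settings' \
      -H 'Content-Type: application/json' \
      -d '{ "persistent": { "cluster.routing.allocation.enable": "all" } }'
    # elasticsearch.yml is not modified; the setting lives in the cluster state files under path.data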