I have taken a Cassandra backup from one cluster and am going to restore it to another (new) cluster with the same configuration.
1) I have copied the backup to each node in the cluster (each node received the backup taken from a different source server).
2) Copied the data into the correct location in the data path.
But when I log into cqlsh, the data does not display. Restarting the nodes also did not work.
To restore a clone cluster, you will have to export the tokens from all Cassandra nodes and put them into the new cassandra.yaml on each node. Copy all the SSTables per node after schema creation and start the Cassandra services.
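As a rough sketch of that flow (the keyspace, table, and IP below are placeholders, and nodetool output formats vary slightly between Cassandra versions):

    # On the OLD cluster: collect the tokens owned by one source node
    nodetool ring | grep '10.0.0.1' | awk '{print $NF}' | paste -sd, -

    # On the matching NEW node, before first start, put that list into cassandra.yaml:
    #   initial_token: <comma-separated tokens from the source node>
    #   auto_bootstrap: false

    # After creating the schema on the new cluster, copy the SSTables into the
    # matching table directory and load them (or simply restart the node):
    nodetool refresh my_keyspace my_table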
I have an ES cluster set up with 3 master and 2 data nodes, and it is running properly. I want to change the data and log location of one of the data nodes from local to an external disk.
In my current YAML file:
path.data: /opt/elasticsearch/data
path.logs: /opt/logs/elasticsearch
Now I have added 2 external disks to my server to store data/logs and would like to change the location to the new drives.
I have added the new disks. What is the correct process to point ES data/logs to the new disks?
The data on this node can be deleted as this is a dev env.
Could I just stop ES on this server,
delete the contents of the current data and log folders,
mount the new drives to the same mount points, and restart the cluster?
Thanks
You can just change the settings in the YAML file and restart the Elasticsearch service; it should work for you. There is no automatic reload when you change the YAML configuration.
Steps:
Change the paths in the YAML file
Restart the service
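A minimal sketch of those steps on a systemd-based install (the mount points /data1 and /logs1, and the elasticsearch service/user names, are assumptions; adjust to your setup):

    sudo systemctl stop elasticsearch
    # mount the new disks (assuming they are already formatted and in /etc/fstab)
    sudo mount /data1
    sudo mount /logs1
    # create the new directories and hand them to the ES user
    sudo mkdir -p /data1/elasticsearch /logs1/elasticsearch
    sudo chown -R elasticsearch:elasticsearch /data1/elasticsearch /logs1/elasticsearch
    # then change elasticsearch.yml:
    #   path.data: /data1/elasticsearch
    #   path.logs: /logs1/elasticsearch
    sudo systemctl start elasticsearch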
I have an Elasticsearch cluster running. As of now no backup is enabled, neither to S3 nor to NAS. We want to move the Elasticsearch cluster onto new servers as an upgrade, and the data size is 100 GB across 2 indices.
Since we don't have a backup, can we copy the data directory from the running cluster on all three nodes to the new cluster? Will this work?
Currently running version: ES 6.2.3, moving to ES 6.3.4.
Please advise.
Thanks in advance.
Taking a copy of the indices folder on the running system of your Elasticsearch cluster and restoring it on the new ES cluster works fine.
Thanks to @Andreas Volkmann
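For reference, a rough sketch of that copy for one node (paths, hostnames, and service names are placeholders; stopping Elasticsearch first avoids copying files that are mid-write):

    # on the old node
    sudo systemctl stop elasticsearch
    rsync -a /var/lib/elasticsearch/ new-es-node1:/var/lib/elasticsearch/
    # on the new node (running 6.3.4)
    sudo chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
    sudo systemctl start elasticsearch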
I have a Hadoop cluster with three servers: master, slave1, and slave2. My question is: if the master server of the Ambari system fails, how can we recover? Do we need to add a new server and install Ambari again, or can we recover our data from the failed server? If we add a new server and want to assign it as the master, how do we do that? Could you suggest how to resolve this when the master server goes down?
Thanks in advance.
There is no way to retrieve the data if the NameNode dies and you have no backup. You need a backup NameNode (aka Secondary NameNode), which takes a metadata backup at a fixed interval. This interval is generally long, so you may still lose some data.
With Hadoop 2.0 you can take more frequent backups with the help of a passive (standby) NameNode, which becomes active if the main NameNode dies, so the data remains accessible.
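If you are on Hadoop 2.x with HA enabled, you can check which NameNode is currently active from the command line (nn1 and nn2 are the logical NameNode IDs defined in hdfs-site.xml, used here as placeholders):

    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2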
I want to delete a DataNode from my Hadoop cluster, but I don't want to lose my data. Is there any technique so that the data on the node I am going to delete gets replicated to the remaining DataNodes?
What is the replication factor of your Hadoop cluster?
If it is the default, which is generally 3, you can delete the DataNode directly, since the data automatically gets re-replicated. This process is controlled by the NameNode.
If you changed the replication factor of the cluster to 1, then deleting the node will lose the data on it; it cannot be replicated further.
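If you are unsure of the configured replication factor, you can read it straight from the cluster configuration:

    hdfs getconf -confKey dfs.replication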
Check that all the current DataNodes are healthy. For this you can go to the Hadoop master admin console under the Datanodes tab; the address is normally something like http://server-hadoop-master:50070
Add the server you want to delete to the file /opt/hadoop/etc/hadoop/dfs.exclude, using its fully qualified domain name, on the Hadoop master and all the current DataNodes (your config directory can be different, please double-check this)
Refresh the cluster node configuration by running the command hdfs dfsadmin -refreshNodes from the Hadoop NameNode master
Check the state of the server being removed in the "Decommissioning" section of the Hadoop master admin home page; this may take from a couple of minutes to several hours or even days, depending on the volume of data you have
Once the server is shown as decommission complete, you may delete it
NOTE: if you have other services like YARN running on the same server, the process is relatively similar, but uses the file /opt/hadoop/etc/hadoop/yarn.exclude and yarn rmadmin -refreshNodes run from the YARN master node. A command sketch follows below.
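A minimal command sketch of the steps above (the hostname datanode3.example.com is a placeholder, and the exclude file paths assume the layout mentioned in the steps; they must also be referenced by dfs.hosts.exclude / yarn.resourcemanager.nodes.exclude-path in your configuration):

    # on the NameNode (and the ResourceManager, per the YARN note)
    echo "datanode3.example.com" | sudo tee -a /opt/hadoop/etc/hadoop/dfs.exclude
    echo "datanode3.example.com" | sudo tee -a /opt/hadoop/etc/hadoop/yarn.exclude

    # tell HDFS and YARN to re-read the exclude files
    hdfs dfsadmin -refreshNodes
    yarn rmadmin -refreshNodes

    # optionally watch decommissioning progress from the CLI instead of the web UI
    hdfs dfsadmin -report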
Can I use the snapshot and restore module of Elasticsearch to export one index, or all of them together, to a new Elasticsearch cluster?
For example:
Can I export all the indices from development cluster to QA cluster?
Check out https://github.com/taskrabbit/elasticsearch-dump; we made it for exactly this purpose.
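A typical invocation might look like the following (the hostnames and index name are placeholders; check the project README for the exact options your version supports):

    npm install -g elasticdump
    # copy the mapping first, then the documents
    elasticdump --input=http://dev-es:9200/my_index --output=http://qa-es:9200/my_index --type=mapping
    elasticdump --input=http://dev-es:9200/my_index --output=http://qa-es:9200/my_index --type=data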
Yes, you can use the snapshot & restore feature to back up indices from one cluster and move them to another cluster. Once you have created the snapshot on your development cluster, you can copy it over to your QA cluster and restore it.
The advantage over performing a direct copy of the index folder is that you do not need to shut down the clusters. A snapshot is a point-in-time capture of the indices' state and can be taken without taking the cluster offline.
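As a rough sketch, assuming a shared-filesystem repository (the repository name my_backup, snapshot name snapshot_1, and location are placeholders, and the location must be listed under path.repo in elasticsearch.yml on every node):

    # register the repository on the development cluster
    curl -X PUT "http://dev-es:9200/_snapshot/my_backup" -H 'Content-Type: application/json' \
      -d '{"type": "fs", "settings": {"location": "/mnt/es_backups"}}'

    # take the snapshot
    curl -X PUT "http://dev-es:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true"

    # copy /mnt/es_backups to the QA cluster, register the same repository there, then restore
    curl -X POST "http://qa-es:9200/_snapshot/my_backup/snapshot_1/_restore"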