snapshot_restore_exception migrating Elasticsearch 7 indices between servers

We want to migrate an Elasticsearch v7 index from server 1 to server 2.
How can we copy and restore a snapshot from one server to another?
The official documentation seems to assume both servers share the same physical storage. These servers do not "know" each other and have completely separate disks. In our case we:
Create the same repository foo on both servers
Run a snapshot foo1 on server 1
Create the /snapshots/foo directory on server 2
Verify the repo on server 2
Run a snapshot footest on server 2 to verify files are written to /snapshots/foo
Copy across the /snapshots/foo directory from server 1 to server 2
Run the restore of foo1:
POST /_snapshot/foo/foo1/_restore
{
  "indices": "foo"
}
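(For reference, registering the same repository foo on both servers, as in step 1, would look something like this minimal sketch; the location value is assumed from the /snapshots/foo directory above, and path.repo in elasticsearch.yml must include it.)
PUT /_snapshot/foo
{
  "type": "fs",
  "settings": {
    "location": "/snapshots/foo"
  }
}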
The error from the restore is:
{
  "error": {
    "reason": "[foo:foo1] snapshot does not exist",
    "root_cause": [
      {
        "reason": "[foo:foo1] snapshot does not exist",
        "type": "snapshot_restore_exception"
      }
    ],
    "type": "snapshot_restore_exception"
  },
  "status": 500
}
When we check the repo:
GET /_snapshot/foo/*
We can see only one snapshot (which we created on server 2). We cannot see the snapshot we copied across from server 1.
So how does one transfer a snapshot to server 2?

@Marc there are a few options for you to copy index data from one server to another:
Configure CCR (cross-cluster replication) for your index and let it sync between the two servers (which can be on separate machines, networks, etc.; they just need to be able to communicate with each other)
You can create a single repository which can be accessed by both servers (using a shared IAM user or role, assuming it's on AWS). You can then take a snapshot of your index from server1 and restore the snapshot on server2; see the sketch at the end of this answer.
Not recommended, but you can copy the ${path.data} files from server1 to ${path.data} on server2.
However, the main question here would be: are server1 and server2 running the same ES version?
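A minimal sketch of the shared-repository option, assuming the repository-s3 plugin is installed on both clusters and using an illustrative bucket name:
# register the same repository on both clusters
PUT /_snapshot/shared_repo
{
  "type": "s3",
  "settings": {
    "bucket": "my-es-snapshots"
  }
}
# on server1: take a snapshot
PUT /_snapshot/shared_repo/snap1?wait_for_completion=true
# on server2: restore it
POST /_snapshot/shared_repo/snap1/_restore
{
  "indices": "foo"
}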

Related

Connect Sonarqube 6.7 to external Elasticsearch

I've been using SonarQube with its embedded database for demos. Now, I need to connect it to an external Elasticsearch instance to meet the requirements of a production environment.
Which configurations do I have to add to elasticsearch.yml and sonar.properties?
For the move to production, you don't need to, and shouldn't try to connect to an external Elasticsearch instance. SonarQube starts up and manages its own instance internally.
What you do need to do is connect to an external database, and that's easily done by setting the correct properties in $SONARQUBE_HOME/conf/sonar.properties
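For example, a minimal sketch of the database settings in $SONARQUBE_HOME/conf/sonar.properties, with a placeholder host and credentials:
# external database connection (adjust driver, host, and credentials)
sonar.jdbc.username=sonar
sonar.jdbc.password=sonar
sonar.jdbc.url=jdbc:postgresql://db.example.com:5432/sonarqube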
I succeeded in using an external Elasticsearch with the latest SonarQube 8.9, but it's just a hack; use it at your own risk.
Steps
Create an Elasticsearch server
First, start an Elasticsearch instance anywhere.
Modify the config files
Add the external host and port to conf/sonar.properties:
cat >> conf/sonar.properties << EOF
# your external host and port
sonar.search.port=9200
sonar.search.host=192.168.xx.xx
EOF
# create a dummy run script so SonarQube does not launch its embedded Elasticsearch
cat > elasticsearch/bin/elasticsearch << EOF
#!/bin/bash
# effectively an infinite sleep: cat blocks forever reading stdin
cat
EOF
Run SonarQube
Just start SonarQube and view the indices in your new Elasticsearch.

Creating a large number of Linux machines fails with provisioning state `Canceled`

I'm trying to deploy a large number of Linux machines using the azure-cli (v 2.0.7) with this bash script:
#!/usr/bin/env bash
number_of_servers=12
for i in `seq 1 1 ${number_of_servers}`;
do
    # the trailing & launches every az vm create in parallel
    az vm create --resource-group Automationsystem --name VM${i} --image BaseImage --admin-username azureuser --size Standard_F4S --ssh-key-value ~/.ssh/mykey.pub &
done
The machines are created from a custom image.
When I ran it, I got the following error:
The resource operation completed with terminal provisioning state 'Canceled'. The operation has been preempted by a more recent operation
I tried to create fewer machines, but the error still occurs.
I looked at this question but it did not solve my problem.
Can I create the machines from a custom image?
Yes, we can use a custom image to create an Azure VMSS (virtual machine scale set).
We can use this template to deploy a VMSS with a custom image.
"sourceImageVhdUri": {
"type": "string",
"metadata": {
"description": "The source of the blob containing the custom image, must be in the same region of the deployment."
}
},
We should store the image in an Azure storage account first.
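As a hedged sketch, deploying such a template with the CLI could look like this; the template file name and VHD URL are placeholders:
# deploy the template, passing the custom image VHD as a parameter
az group deployment create \
  --resource-group Automationsystem \
  --template-file vmss-custom-image.json \
  --parameters '{"sourceImageVhdUri": {"value": "https://mystorage.blob.core.windows.net/vhds/BaseImage.vhd"}}'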

Restore snapshot to another DC server using elasticsearch-curator

I need to keep Elasticsearch data in sync across 3 servers using elasticsearch-curator. All I want is to update data on one server and have the others update themselves using the snapshot-and-restore method.
I was able to create a snapshot using curator on the first server but couldn't restore it on another.
Snapshot
While taking the snapshot, the hosts entry in curator.yml on Server 1 is hosts: ["localhost"]. I can easily restore it on Server 1 itself.
But the problem arises when I try to restore it on Server 2, where the hosts entry in curator.yml is hosts: ["localhost","Server 1 IP"].
It generates this error message:
2017-02-27 10:39:58,927 INFO Preparing Action ID: 1, "restore"
2017-02-27 10:39:59,145 INFO Trying Action ID: 1, "restore": Restore all indices in the most recent curator-* snapshot with state SUCCESS. Wait for the restore to complete before continuing. Do not skip the repository filesystem access check. Use the other options to define the index/shard settings for the restore.
2017-02-27 10:39:59,399 INFO Restoring indices "['test_sec']" from snapshot: curator-20170226143036
2017-02-27 10:39:59,409 ERROR Failed to complete action: restore. <class 'curator.exceptions.FailedExecution'>: Exception encountered. Rerun with loglevel DEBUG and/or check Elasticsearch logs for more information. Exception: TransportError(500, u'snapshot_restore_exception', u'[all_index:curator-20170226143036]snapshot does not exist')
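For reference, the description text in the log matches curator's stock restore example, so the action file was presumably close to this sketch; the repository name and index list are taken from the error and log lines above, and the rest is an assumption:
actions:
  1:
    action: restore
    description: >-
      Restore all indices in the most recent curator-* snapshot with state
      SUCCESS. Wait for the restore to complete before continuing. Do not
      skip the repository filesystem access check.
    options:
      repository: all_index
      name:
      indices: ['test_sec']
      wait_for_completion: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: curator-
    - filtertype: state
      state: SUCCESS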
This is kind of related to the answer at how to restore elasticsearch indices from S3 to blank cluster using curator?
How did you add the repository to the original (source) cluster? You need to use the exact same steps to add the repository to the new (target) cluster. Only then will the repository be readable by the new cluster.
Without more information, it's hard to pinpoint, but the snapshot does not exist message seems clear in this regard: it indicates that the repository is not the same shared file system as the source cluster's.
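For example, if the source cluster's repository was registered as a shared filesystem repository, the same registration (pointing at the same shared mount) has to be run against the target cluster before it can read the snapshots. A minimal sketch, with the mount path as an assumption and readonly added on the target as a common safeguard:
PUT /_snapshot/all_index
{
  "type": "fs",
  "settings": {
    "location": "/mnt/es-snapshots",
    "readonly": true
  }
}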

Setting up a three-tier environment in Puppet

These are my files:
Nodes.pp file
site.pp file
I need to set up the infrastructure in the diagram, and I would like to use Puppet automation to do so. I would need to:
Create 4 VMs: one for the DB, 1 web server, 1 load balancer, 1 master
Set them up with Puppet Agent
Find the appropriate modules/cookbooks from the community sites (Puppet Forge / Chef Supermarket)
Configure the nodes using recipes/classes fetched from the community sites
Provide configuration parameters in order to have all these nodes connect to each other.
The end goal is to have a working WordPress setup.
I got stuck with the master-agent configuration process. I have a Puppet master and 3 agents up and running, but whenever I run puppet agent --test on an agent, it throws an error. I look forward to the community's help.
The error I am getting is...
[root@agent1 vagrant]# puppet agent --noop --test
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
First, take a look at the Puppet master logs.
Second: the error message is too short; something is missing after the
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could
The text after the "Could" can be helpful ;)
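Two hedged starting points (the exact log path varies by Puppet version and install method):
# on the master: watch the Puppet Server log while the agent runs
tail -f /var/log/puppetlabs/puppetserver/puppetserver.log
# on the agent: rerun with debug output to capture the full 400 message
puppet agent --test --debug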

weed-fs in Linode running master

I'm using weed-fs 0.7 beta.
I'm having an issue where the master server never has any free volume servers, even though I have 2 of them.
I have 2 servers in Linode; I used one of them to run a master, volume, and filer server using this command:
./weed server -ip.bind="192.168.139.166" -master.port=9333 -volume.port=8080 -volume.max="7" -dir="./data" -master.dir="./master" -filer=true -filer.dir="./filer"
All 3 systems start properly. But when I check the master server using this command:
curl "http://{IP-ADDRESS}:9333/dir/status?pretty=y"
This is the result:
{
  "Topology": {
    "DataCenters": null,
    "Free": 0,
    "Max": 0,
    "layouts": null
  },
  "Version": "0.70 beta"
}
I can add a file into the volume server directly using this:
curl -F file=help.txt http://{IP-ADDRESS}:8080/3,01637037d6
When I attempt to add the above file, this is the response on the server's console:
I0512 08:30:06 20079 store.go:346] volume 3 size 20 will exceed limit 0
I0512 08:30:06 20079 store.go:348] error when reporting size: No master node available!
My best guess is that somehow the master server does not seem to be able to detect the volume server, even though both of them are on the same machine.
I tried using my 2nd server to run a volume server pointed at the master server's private IP address, but that does not work either.
It does seem, though, that the volume servers are able to work without the master server.
Use -ip="192.168.139.166" instead of -ip.bind.
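That is, the invocation from the question with only that flag swapped:
./weed server -ip="192.168.139.166" -master.port=9333 -volume.port=8080 -volume.max="7" -dir="./data" -master.dir="./master" -filer=true -filer.dir="./filer"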
