Elastic data restore from S3 - elasticsearch

I have elasticsearch backup taken into S3. But I am not able to restore it using any of the commands mentioned below.
curl -XPOST http://localhost:9200/_snapshot/elasticsearch/snap-dev_1/_restore
curl -XPOST http://localhost:9200/_snapshot/snap-deliveryreports_june2016bk/elasticsearch/_restore
I can see the files in S3:
What is the command to restore the data shown in the image?
update:
The following command is successful (returns "acknowledged": true), which means the access key, secret key, bucket name and region are correct.
curl -XPUT 'http://localhost:9200/_snapshot/s3_repository?verify=true&pretty' -d'
{
  "type": "s3",
  "settings": {
    "bucket": "todel162",
    "region": "us-east-1"
  }
}'
I guess I only need to know how to use restore snapshot command.

You can use the cat recovery API to monitor your restore status, since a restore simply piggybacks on Elasticsearch's regular recovery mechanism; check whether that API shows anything.
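For example, assuming the repository registered in the update is named s3_repository and the snapshot is the snap-dev_1 mentioned in the question, a restore call might look like this (a sketch, not tested against your cluster):

```shell
# Restore a snapshot from the registered S3 repository.
# Repository and snapshot names are assumptions taken from the question;
# an index being restored must not already exist open in the cluster.
curl -XPOST 'http://localhost:9200/_snapshot/s3_repository/snap-dev_1/_restore?pretty'

# Watch the restore progress via the cat recovery API
curl -XGET 'http://localhost:9200/_cat/recovery?v'
```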

Related

How to create index and type in elastic search?

I have installed Elasticsearch version 2.3.2 and need to add an index and type to it. Before, I used the Sense plugin to achieve this, but the add-on was removed from the Chrome Web Store. Any suggestions?
Sense plugin is now a Kibana app. Please refer to the official reference for installation.
The answer to your question: you can create an index and a type in Elasticsearch by indexing a first document, which creates both on the fly (note that index names must be lowercase):
curl -XPUT "http://localhost:9200/index_name/type_name/1" -d '{"field": "value"}'
You can use a REST client like Postman to do this; Postman is available as a Chrome extension.
The other way is to SSH into one of the nodes in your cluster and run the POST command using curl:
curl -XPOST 'localhost:9200/bookindex/books' -H 'Content-Type: application/json' -d'
{
  "bookId" : "A00-3",
  "author" : "Sankaran",
  "publisher" : "Mcgrahill",
  "name" : "how to get a job"
}'
It will automatically create an index named 'bookindex' with type 'books' and index the document. If the index and type already exist, it will simply add the entry to the index.
All operations in Elasticsearch can be done via REST API calls.
To create an index use the index API
curl -XPUT 'localhost:9200/twitter?pretty' -H 'Content-Type: application/json' -d'{"settings" : {"index" : {"number_of_shards" : 3, "number_of_replicas" : 0 }}}'
To create the mapping, you can use the _mapping endpoint:
curl -XPUT http://localhost:9200/twitter/tweets/_mapping -d @create_p4_schema_payload.json
Here, the mapping is provided via a JSON file named create_p4_schema_payload.json, which contains the following:
{
  "properties": {
    "user_name": {
      "type": "text"
    }
  }
}
All these can be run from any terminal that supports curl. On Windows, you can install Cygwin to run Linux commands from the command prompt.
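To double-check that the index and the mapping were applied, you can read them back with the corresponding GET endpoints (same index and type names as above; a sketch that assumes a cluster on localhost):

```shell
# Show the index settings and the mapping created above
curl -XGET 'localhost:9200/twitter?pretty'
curl -XGET 'localhost:9200/twitter/tweets/_mapping?pretty'
```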
As was said above, you can access it through REST API calls. The command you need to run is:
curl -XPUT 'http://localhost:9200/index_name?include_type_name=true'
A curl command is plain text that can be imported into Postman, for example, or you can install the curl CLI and simply run it. Simply put:
It's a PUT API call to Elasticsearch at /index_name, with the query parameter include_type_name, a boolean flag that (in Elasticsearch 7.x) tells the API whether the request still uses custom mapping types.
The reference guide is at: Elastic Search - Create index API
The Sense plugin has been removed from the Chrome Web Store. You can use Kibana instead, whose Dev Tools console offers Sense-like functionality for running Elasticsearch queries.
Follow this link to install Kibana.

Deleting a type in Elastic Search using curl

I am trying to delete a type in Elasticsearch using a curl script in a bat file:
ECHO Running Curl Script
curl -XDELETE "http://localhost/testing/" -d''
pause
The response I got was No handler found for uri. I looked into the Elasticsearch documentation, which says to use delete by query: https://www.elastic.co/guide/en/elasticsearch/reference/5.0/docs-delete-by-query.html
How can I modify my curl script to use this new API for ES 2.3?
Thanks
If you want to use the delete-by-query API to delete all documents of a given type, you can do it like this:
curl -XDELETE "http://localhost/testing/_query?q=_type:typename"
However, you're better off deleting the index and recreating it so you can modify the mapping type as you see fit.
curl -XDELETE "http://localhost/testing/"
curl -XPUT "http://localhost/testing/" -d '{"settings": {...}, "mappings": {...}}'

How to bulk create (export/import) indices in elasticsearch?

I'm trying to upgrade our ELK stack from 1.x > 5.x following the re-index from remote instructions. I'm not sure of how to export a list of the indices that I need to create and then import that list into the new instance. I've created a list of indices using this command, both with "pretty," and without, but I'm not sure which file format to use as well as what to next do with that file.
The create index instructions don't go into how to create more than one at a time, and the bulk instructions only refer to creating/indexing documents, not creating the indices themselves. Any assistance on how to best follow the upgrade instructions would be appreciated.
I apparently don't have enough reputation to link the "create index" and "bulk" instructions, so apologies for that.
With a single curl command you can create an index template that will trigger index creation as soon as documents hit your ES 5.x cluster. The template kicks in for every new index created on the fly, so you can use the "reindex from remote" technique to move your documents from ES 1.x to ES 5.x without worrying about index creation, since the template takes care of it.
curl -XPUT 'localhost:9200/_template/my_template' -H 'Content-Type: application/json' -d'
{
  "template": "*",
  "settings": {
    "index.refresh_interval" : -1,
    "index.number_of_replicas" : 0
  }
}
'
I was able to accomplish this by generating a list of index names (cleaned up with sed), then feeding that file through the following script:
#!/bin/bash
while read some_index; do
  curl -XPUT "localhost:9200/$some_index?pretty" -d'
  {
    "settings" : {
      "index" : {
        "refresh_interval" : -1,
        "number_of_replicas" : 0
      }
    }
  }'
  sleep 1
done < "$1"
If anyone can point me in the direction of any pre-existing mechanisms in Elasticsearch, though, please do.
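One possible way to produce the list that the script reads is the _cat/indices API on the old cluster; the host name, output file, and script name below are just examples:

```shell
# Dump bare index names from the 1.x cluster, one per line,
# then feed the file to the script above (saved here as create_indices.sh)
curl -s 'old-cluster:9200/_cat/indices?h=index' > index_list.txt
./create_indices.sh index_list.txt
```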

How to delete all documents from an elasticsearch index

I am trying to delete all documents from my index and am getting the following error from curl: No handler found for uri [/logstash-2016.03.11/logevent/] and method [DELETE]
Here is my delete command on Windows:
curl -XDELETE "http://localhost:9200/logstash-2016.03.11/logevent/"
Can anybody help?
curl -XPOST "http://localhost:9200/logstash-2016.03.11/logevent/_delete_by_query" -d'
{
  "query": {
    "match_all": {}
  }
}'
The delete-by-query API is new and should still be considered experimental. The API may change in ways that are not backwards compatible.
https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete-by-query.html
You cannot delete a type from an index by executing a delete on the type.
To solve your problem, you have two solutions.
If you only have a single type in your logstash index, just execute curl -XDELETE "http://localhost:9200/logstash-2016.03.11". It will delete the old index, but Logstash will recreate it when it processes the next event.
Or you install the delete-by-query plugin (https://www.elastic.co/guide/en/elasticsearch/plugins/2.2/plugins-delete-by-query.html) and run something like this:
curl -XDELETE "http://localhost:9200/logstash-2016.03.11/logevent/_query" -d '
{
  "query": { "match_all": {} }
}'
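Whichever route you take, you can verify the outcome with a count on the type (host and index names as in the question; a sketch, assuming the cluster answers on localhost:9200):

```shell
# Count the remaining documents of the type after the delete
curl -XGET 'http://localhost:9200/logstash-2016.03.11/logevent/_count?pretty'
```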

Kibana: Can't import shakespeare.json in Sense Web Plugin

I am trying to import shakespeare.json as per the Elasticsearch tutorial.
[Environment]
Elastic Search 2.1
Sense -Extension for Chrome
[Background]
When I paste curl -XPUT localhost:9200/_bulk --data-binary @shakespeare.json
into the Sense tab (the extension opens a new tab with two panes),
it is converted to PUT /_bulk and the output is:
{
  "error": {
    "root_cause": [
      {
        "type": "parse_exception",
        "reason": "Failed to derive xcontent"
      }
    ],
    "type": "parse_exception",
    "reason": "Failed to derive xcontent"
  },
  "status": 400
}
[My Findings]
I have downloaded shakespeare.json locally, but I think Sense is not able to locate the path where the file resides (maybe I have the file in the wrong location).
I also tried to find Sense's current directory, but I am not sure where to find the index.html of this Chrome extension on Windows.
Whatever documentation I found is Linux-specific.
Any input is appreciated.
You should not do this in Sense, but simply from the command line:
curl -XPUT localhost:9200/_bulk --data-binary @shakespeare.json
Make sure to point to the correct path of the shakespeare.json file if it is not located in the directory you're in:
curl -XPUT localhost:9200/_bulk --data-binary @/path/to/shakespeare.json
UPDATE
If you run on Windows and you don't have the cURL command present, you can download cURL at https://curl.haxx.se/download.html
In the latest Elasticsearch 6, the curl command to load shakespeare_6.0.json is the following:
curl -H 'Content-Type: application/x-ndjson' -XPUT localhost:9200/shakespeare/doc/_bulk --data-binary @shakespeare_6.0.json
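The parse_exception above usually means the body was not valid bulk NDJSON: every document line must be preceded by an action line, and every line must end with a newline. A minimal hand-built payload (index, type, and field names are illustrative only) looks like this:

```shell
# Build a two-line _bulk payload: one action line, one source line,
# each terminated by a newline (names here are examples, not from the tutorial)
printf '%s\n' \
  '{"index":{"_index":"shakespeare","_type":"doc"}}' \
  '{"line_id":1,"speaker":"KING HENRY IV"}' > bulk_payload.json

# Sanity check: exactly one action line per document line
wc -l < bulk_payload.json   # expect 2
```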