How to bulk create (export/import) indices in elasticsearch? - elasticsearch

I'm trying to upgrade our ELK stack from 1.x to 5.x following the reindex-from-remote instructions. I'm not sure how to export a list of the indices that I need to create and then import that list into the new instance. I've created a list of indices using this command, both with "pretty" and without, but I'm not sure which file format to use or what to do with that file next.
The create index instructions don't go into how to create more than one at a time, and the bulk instructions only refer to creating/indexing documents, not creating the indices themselves. Any assistance on how to best follow the upgrade instructions would be appreciated.
I apparently don't have enough reputation to link the "create index" and "bulk" instructions, so apologies for that.

With a single curl command you can create an index template that will trigger index creation as soon as documents hit your ES 5.x cluster.
Basically, this one command creates an index template that kicks in for every new index created on the fly. You can then use the "reindex from remote" technique to move your documents from ES 1.x to ES 5.x without worrying about index creation, since the index template takes care of it.
curl -XPUT 'localhost:9200/_template/my_template' -H 'Content-Type: application/json' -d'
{
  "template": "*",
  "settings": {
    "index.refresh_interval" : -1,
    "index.number_of_replicas" : 0
  }
}
'
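For completeness, a reindex-from-remote call run against the 5.x cluster might then look like the sketch below; old-es-host and my_index are placeholders, and the old host has to be whitelisted via reindex.remote.whitelist in elasticsearch.yml first:
curl -XPOST 'localhost:9200/_reindex?pretty' -H 'Content-Type: application/json' -d'
{
  "source": {
    "remote": {
      "host": "http://old-es-host:9200"
    },
    "index": "my_index"
  },
  "dest": {
    "index": "my_index"
  }
}
'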

Was able to accomplish this with a formatted list of indices (the index list cleaned up with sed, one index name per line), then feeding that file through the following script:
#!/bin/bash
# Reads one index name per line from the file passed as $1 and creates each index
while read some_index; do
  curl -XPUT "localhost:9200/$some_index?pretty" -H 'Content-Type: application/json' -d'
  {
    "settings" : {
      "index" : {
        "refresh_interval" : -1,
        "number_of_replicas" : 0
      }
    }
  }'
  sleep 1
done <"$1"
If anyone can point me in the direction of any pre-existing mechanisms in Elasticsearch, though, please do.
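For anyone looking to build that index list in the first place, one option (assuming the old cluster's cat API is reachable; the host, file, and script names here are just examples) is:
# Dump index names from the 1.x cluster, one per line
curl -s 'http://old-es-host:9200/_cat/indices?h=index' | sort > index_list.txt

# Feed the list into the script above
./create_indices.sh index_list.txt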

Related

How to create an index and type in Elasticsearch?

I have installed Elasticsearch version 2.3.2 and need to add an index and type to it. I used to do this with the Sense plugin, but the add-on was removed from the Chrome Web Store. Please give a suggestion.
The Sense plugin is now a Kibana app. Please refer to the official reference for installation.
To answer your question: you can create an index and type in Elasticsearch by running the curl command below (note that index names must be lowercase, and the type is defined through its mapping):
curl -XPUT "http://localhost:9200/indexname" -d '{"mappings": {"typename": {"properties": {}}}}'
You can use a REST client like Postman to do this; Postman is available as a Chrome extension. The other way is to SSH into one of the nodes in your cluster and run the POST command using curl.
curl -X POST 'localhost:9200/bookindex/books' -H 'Content-Type: application/json' -d'
{
  "bookId" : "A00-3",
  "author" : "Sankaran",
  "publisher" : "Mcgrahill",
  "name" : "how to get a job"
}'
It will automatically create an index named 'bookindex' with type 'books' and index the data. If the index and type already exist, it will simply add the entry to the index.
All operations in Elasticsearch can be done via REST API calls.
To create an index, use the create index API:
curl -XPUT 'localhost:9200/twitter?pretty' -H 'Content-Type: application/json' -d'
{
  "settings" : {
    "index" : {
      "number_of_shards" : 3,
      "number_of_replicas" : 0
    }
  }
}'
To create the mapping, you can use the _mapping endpoint:
curl -XPUT http://localhost:9200/twitter/tweets/_mapping -d @create_p4_schema_payload.json
Here, the mapping is provided via a JSON file named create_p4_schema_payload.json, which contains the following:
{
  "properties": {
    "user_name": {
      "type": "text"
    }
  }
}
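To confirm the mapping was applied, you can read it back with a GET on the same endpoint:
curl -XGET 'http://localhost:9200/twitter/tweets/_mapping?pretty'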
All of these can be run from any terminal that supports curl. On Windows, you can install Cygwin to run Linux commands from the command prompt.
As was said above, you can do this through REST API calls. The command you need to run is:
curl -XPUT 'http://localhost:9200/indexname?include_type_name=true'
curl commands are raw text that can be imported into Postman, for example, or you can install the curl CLI and simply run them. Simply put:
It's a PUT API call to the Elasticsearch host with the index name, adding the query parameter include_type_name=true (a boolean which, on 7.x, allows a custom type name to be used in the mappings).
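A short sketch of what that looks like on 7.x with a custom type in the mappings (my-index and my_type are placeholders):
curl -XPUT 'http://localhost:9200/my-index?include_type_name=true' -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "my_type": {
      "properties": {
        "title": { "type": "text" }
      }
    }
  }
}'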
The reference guide is at: Elastic Search - Create index API
The Sense plugin has been removed from the Chrome Web Store. You can use Kibana, which has a Sense-like dev tool (Console), to run Elasticsearch queries.
Follow this link to install Kibana.

How to get the list of indices created in Kibana?

I was able to retrieve the indices from Elasticsearch and register the corresponding index pattern in Kibana programmatically in Java. Now I would like to get the list of the index patterns already created in Kibana so that I could cross check it against the index list from Elasticsearch so as to not create them again in Kibana.
Is there an API to fetch the index pattern list from Kibana?
--
API for getting the list of indices from Elasticsearch:
http://{hostname}:{port}/_aliases
API for creating an index pattern in Kibana:
http://{hostname}:{port}/{kibana instance Id}/index-pattern/{index pattern title}
Use the following query:
GET /.kibana/index-pattern/_search
This query works (from the Kibana dev console):
GET .kibana/_search?size=10000
{
  "_source": ["index-pattern.title"],
  "query": {
    "term": {
      "type": "index-pattern"
    }
  }
}
Works for Kibana 7.x:
Get all index patterns
curl -s 'http://192.168.100.100:5601/api/saved_objects/_find?fields=title&fields=type&per_page=10000&type=index-pattern'
# Use jq to get the index-pattern name:
curl -s 'http://192.168.100.100:5601/api/saved_objects/_find?fields=title&fields=type&per_page=10000&type=index-pattern' | jq '.saved_objects[].attributes.title'
"service01"
"service02"
"service03"
DELETE specific index pattern
curl -XDELETE -H 'kbn-xsrf: ""' 'http://192.168.100.100:5601/api/saved_objects/index-pattern/970070d0-f252-11ea-b492-31ec85db4535'
-H 'kbn-xsrf: ""' must be set or the API will complain: {"statusCode":400,"error":"Bad Request","message":"Request must contain a kbn-xsrf header."}
Use jq -r to get the values without quotes.
I'm afraid this still isn't available at the moment; there is no API that exposes all the index patterns created in Kibana.
Keep in mind, though, that you can only create an index pattern in Kibana if the corresponding index already exists in ES. So consider checking your ES indices for an existing index and creating it if it's missing. That way you can be sure that if an index isn't in your indices list, there's no way an index pattern could have been created for it in Kibana.
You can list them from the API:
GET _cat/indices/.marvel*
GET _cat/indices/.kibana
I looked at the Kibana (version 5.5) console and could get the same result with this query:
curl -X POST -H 'Content-Type: application/json' \
-d '{"query":{"match_all":{}},"size":10000}' \
http://$ES_HOST/.kibana/index-pattern/_search/\?stored_fields\=""
Please note that making a GET request to the above URL, as below, will also return the fields, but the results are limited to 10.
curl http://$ES_HOST/.kibana/index-pattern/_search/\?stored_fields\=""
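Putting the answers above together, one possible cross-check flow (hosts, ports, and the pattern names are placeholders, and the saved objects API assumes Kibana 6.x or later) could look like this:
# Index names that exist in Elasticsearch
curl -s 'http://localhost:9200/_cat/indices?h=index'

# Index patterns already registered in Kibana
curl -s 'http://localhost:5601/api/saved_objects/_find?type=index-pattern&fields=title&per_page=10000' \
  | jq -r '.saved_objects[].attributes.title'

# Create a pattern only if it is missing ("my-index-*" is a placeholder)
curl -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  'http://localhost:5601/api/saved_objects/index-pattern/my-index-pattern' \
  -d '{"attributes": {"title": "my-index-*"}}'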

Deleting a type in Elastic Search using curl

I am trying to delete a type in Elasticsearch using a curl script in a bat file:
ECHO Running Curl Script
curl -XDELETE "http://localhost/testing/" -d''
pause
The response that I got was "No handler found for uri". I looked into the Elasticsearch documentation and it says to use delete by query: https://www.elastic.co/guide/en/elasticsearch/reference/5.0/docs-delete-by-query.html
How can I modify my curl script to use this new API for ES 2.3?
Thanks
If you want to use the delete-by-query API to delete all documents of a given type, you can do it like this:
curl -XDELETE "http://localhost/testing/_query?q=_type:typename"
However, you're better off deleting the index and recreating it so you can modify the mapping type as you see fit.
curl -XDELETE "http://localhost/testing/"
curl -XPUT "http://localhost/testing/" -d '{"settings": {...}, "mappings": {...}}'

How to delete all documents from an elasticsearch index

I am trying to delete all documents from my index and I am getting the following error with curl: No handler found for uri [/logstash-2016.03.11/logevent/] and method [DELETE]
Here is my delete command on Windows command.
curl -XDELETE "http://localhost:9200/logstash-2016.03.11/logevent/"
can anybody help?
curl -XPOST "http://localhost:9200/logstash-2016.03.11/logevent/_delete_by_query" -d'
{
  "query": {
    "match_all": {}
  }
}'
The delete-by-query API is new and should still be considered
experimental. The API may change in ways that are not backwards
compatible
https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete-by-query.html
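Once the delete-by-query call finishes, a quick way to confirm (assuming the default localhost:9200 endpoint) is to count the remaining documents of that type:
curl "http://localhost:9200/logstash-2016.03.11/logevent/_count?pretty"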
You cannot delete a type from an index by executing a delete on the type.
To solve your problem, you have two solutions.
1. If you only have a single type in your logstash index, just execute curl -XDELETE "http://localhost:9200/logstash-2016.03.11". It will delete the old index, but Logstash will recreate it when it processes the next event.
2. Install the delete-by-query plugin ( https://www.elastic.co/guide/en/elasticsearch/plugins/2.2/plugins-delete-by-query.html ) and run something like this:
curl -XDELETE "http://localhost:9200/logstash-2016.03.11/logevent/_query" -d '
{
  "query": { "match_all": {} }
}'

How to (persistently) update the index.number_of_replicas setting in Elasticsearch without restarting the cluster?

In a running Elasticsearch cluster, the index.number_of_replicas setting in the configuration file is 1.
I could update this to 2 on a running cluster, by running
# curl -XPUT "http://127.0.0.1:9200/_settings?pretty" \
-d '{ "index": {"number_of_replicas":2}}'
{
"acknowledged" : true
}
Elasticsearch immediately creates the extra replicas for existing indexes.
However, newly created indexes have only 1 replica. How can the setting be persisted for newly created indexes too?
The API you used dynamically updates the replica setting for existing indices.
If you want it applied to indices created in the future as well, a better approach is to use an index template.
You can find more information on it here.
curl -XPUT localhost:9200/_template/template_1 -d '
{
  "template" : "*",
  "settings" : {
    "number_of_replicas" : 2
  }
}'
The above should work fine for your case.
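As a quick sanity check (the index name below is just a throwaway placeholder), you can create a new index after installing the template and confirm it picks up two replicas:
# Create a test index, inspect its settings, then remove it
curl -XPUT 'localhost:9200/replica-test'
curl 'localhost:9200/replica-test/_settings?pretty'
curl -XDELETE 'localhost:9200/replica-test'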
