I get this error when I try to push data:
[2017-09-28T22:58:13,583][DEBUG][o.e.a.b.TransportShardBulkAction]
[fE76H5K] [sw_shop5_20170928225616][3] failed to execute bulk item
(index) BulkShardRequest [[sw_shop5_20170928225616][3]] containing
[index {[sw_shop5_20170928225616][product][A40482001], source[n/a,
actual length: [41.6kb], max length: 2kb]}]
Can I extend the length in Elasticsearch? And if so, in the yml file or via curl?
Also I am getting:
Limit of total fields [1000] in index [sw_shop5_20170928231741] has been exceeded
I tried to set it with this curl call:
curl -XPUT 'localhost:9200/_all/_settings' -d ' { "index.mapping.total_fields.limit": 1000000 }'
But I can only apply this once the index already exists. The software I use always generates a new index, and setting it in elasticsearch.yml is not possible because I get this:
Since elasticsearch 5.x index level settings can NOT be set on the nodes configuration like the elasticsearch.yaml, in system properties or command line arguments. In order to upgrade all indices the settings must be updated via the /${index}/_settings API. Unless all settings are dynamic all indices must be closed in order to apply the upgrade. Indices created in the future should use index templates to set default values.
Please ensure all required values are updated on all indices by executing:
curl -XPUT 'http://localhost:9200/_all/_settings?preserve_existing=true' -d '{ "index.mapping.total_fields.limit" : "100000" }'
I get that error when setting this:
index.mapping.total_fields.limit: 100000
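For completeness, the error message above points to index templates for indices that do not exist yet. A minimal sketch of such a template (the template name and the sw_shop5_* pattern are assumptions based on the log above; adjust them to whatever indices your software generates):
curl -XPUT 'localhost:9200/_template/total_fields_template' -H 'Content-Type: application/json' -d'
{
  "template": "sw_shop5_*",
  "settings": {
    "index.mapping.total_fields.limit": 100000
  }
}
'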
Check the full stack trace in the Elasticsearch log on the server.
I got this same error and the stack trace pointed to a mapping issue:
java.lang.IllegalArgumentException: mapper [my_field] of different type, current_type [keyword], merged_type [text]
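To find out which field has conflicting types, one option is to inspect the current mapping of the index (a sketch; the index name here is taken from the log above):
curl -XGET 'localhost:9200/sw_shop5_20170928225616/_mapping?pretty'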
Related
To get the structure of an Elasticsearch index via CLI, we can do:
curl -u myuser:p4ssw0rd -XGET "https://myeshost:9200/myindexname"
Is there a way to get the structure (or other information) about a Kibana index pattern, or get the list of all Kibana index patterns that have been created? I haven't found information about this in the documentation.
There is a way to retrieve all Kibana index-patterns using the command below:
GET .kibana/_search?size=100&q=type:"index-pattern"
Note: if you have more than 100 index-patterns, you might want to increase the size.
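The same query can be run with curl outside the Kibana console (a sketch, reusing the host and credentials from the question; the quotes around the type are URL-encoded):
curl -u myuser:p4ssw0rd -XGET "https://myeshost:9200/.kibana/_search?size=100&q=type:%22index-pattern%22"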
Using the _stats or _settings endpoints:
curl -u myuser:p4ssw0rd -XGET "https://myeshost:9200/myindexname/_stats"
curl -u myuser:p4ssw0rd -XGET "https://myeshost:9200/myindexname/_settings"
Reference:
https://www.elastic.co/guide/en/elasticsearch/reference/6.3/indices-stats.html
https://www.elastic.co/guide/en/elasticsearch/reference/6.3/indices-get-settings.html
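If you are after the field structure specifically, the mapping endpoint returns it as well (same host and credentials assumed as above):
curl -u myuser:p4ssw0rd -XGET "https://myeshost:9200/myindexname/_mapping"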
I'm trying to use the _analyze api with text that looks like this:
--- some -- text ---
This request works as expected:
curl localhost:9200/my_index/_analyze -d '--'
{"tokens":[]}
However, this one fails:
curl localhost:9200/medical_documents/_analyze -d '---'
---
error:
  root_cause:
  - type: "illegal_argument_exception"
    reason: "Malforrmed content, must start with an object"
  type: "illegal_argument_exception"
  reason: "Malforrmed content, must start with an object"
status: 400
Considering the formatting of the response, I assume that Elasticsearch tried to parse the request as YAML and failed.
If that is the case, how can I disable YAML parsing, or _analyze a text that starts with --- ?
The problem is not the YAML parser. The problem is that you are trying to create a type.
The following is incorrect (it will give you the "Malforrmed content, must start with an object" error):
curl localhost:9200/my_index/medical_documents/_analyze -d '---'
This will give you no error, but it is incorrect, because it tells Elasticsearch to create a new type:
curl localhost:9200/my_index/medical_documents/_analyze -d '{"analyzer" : "standard","text" : "this is a test"}'
Analyzers are created at the index level. Verify with:
curl -XGET 'localhost:9200/my_index/_settings'
So the proper way is:
curl -XGET 'localhost:9200/my_index/_analyze' -d '{"analyzer" : "your_analyzer_name","text" : "----"}'
You need to create the analyzer first.
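For example, a sketch of the full flow, assuming a hypothetical custom analyzer called my_analyzer defined when the index is created:
curl -XPUT 'localhost:9200/my_index' -H 'Content-Type: application/json' -d'
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase"]
        }
      }
    }
  }
}
'
curl -XGET 'localhost:9200/my_index/_analyze' -H 'Content-Type: application/json' -d'
{
  "analyzer": "my_analyzer",
  "text": "--- some -- text ---"
}
'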
I was able to retrieve the indices from Elasticsearch and register the corresponding index pattern in Kibana programmatically in Java. Now I would like to get the list of the index patterns already created in Kibana so that I could cross check it against the index list from Elasticsearch so as to not create them again in Kibana.
Is there an API to fetch the index pattern list from Kibana?
--
API for getting the list of indices from Elasticsearch:
http://{hostname}:{port}/_aliases
API for creating an index pattern in Kibana:
http://{hostname}:{port}/{kibana instance Id}/index-pattern/{index pattern title}
Use the following query:
GET /.kibana/index-pattern/_search
This query works (from kibana dev console):
GET .kibana/_search?size=10000
{
"_source": ["index-pattern.title"],
"query": {
"term": {
"type": "index-pattern"
}
}
}
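The same query can be sent with curl instead of the dev console (a sketch; localhost:9200 is an assumption, replace it with your Elasticsearch host):
curl -H 'Content-Type: application/json' 'http://localhost:9200/.kibana/_search?size=10000' -d'
{
  "_source": ["index-pattern.title"],
  "query": {
    "term": {
      "type": "index-pattern"
    }
  }
}
'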
Works for Kibana 7.x:
Get all index patterns
curl -s 'http://192.168.100.100:5601/api/saved_objects/_find?fields=title&fields=type&per_page=10000&type=index-pattern'
# Use jq to get the index-pattern name:
curl -s 'http://192.168.100.100:5601/api/saved_objects/_find?fields=title&fields=type&per_page=10000&type=index-pattern' | jq '.saved_objects[].attributes.title'
"service01"
"service02"
"service03"
DELETE specific index pattern
curl -XDELETE -H 'kbn-xsrf: ""' 'http://192.168.100.100:5601/api/saved_objects/index-pattern/970070d0-f252-11ea-b492-31ec85db4535'
-H 'kbn-xsrf: ""' must be set, or the API will complain: {"statusCode":400,"error":"Bad Request","message":"Request must contain a kbn-xsrf header."}
Use jq -r to get the values without quotes.
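For example, the same command with jq -r (same host assumed as above):
curl -s 'http://192.168.100.100:5601/api/saved_objects/_find?fields=title&fields=type&per_page=10000&type=index-pattern' | jq -r '.saved_objects[].attributes.title'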
I'm afraid there still isn't an API available at the moment that exposes all the index patterns created in Kibana.
But keep in mind that you can only create an index pattern in Kibana if the corresponding index already exists in Elasticsearch. So you could check your Elasticsearch indices to see whether one already exists, and create it if not. If an index isn't in your Elasticsearch index list, there is no way an index pattern for it could have been created in Kibana.
You can list them from the API:
GET _cat/indices/.marvel*
GET _cat/indices/.kibana
I looked at the Kibana (version 5.5) console and could get the same result with this query:
curl -X POST -H 'Content-Type: application/json' \
-d '{"query":{"match_all":{}},"size":10000}' \
http://$ES_HOST/.kibana/index-pattern/_search/\?stored_fields\=""
Please note that making a GET request to the above URL, as below, will also return results, but they are limited to 10.
curl http://$ES_HOST/.kibana/index-pattern/_search/\?stored_fields\=""
I'm trying to upgrade our ELK stack from 1.x to 5.x following the reindex-from-remote instructions. I'm not sure how to export a list of the indices that I need to create and then import that list into the new instance. I've created a list of indices using this command, both with "pretty" and without, but I'm not sure which file format to use or what to do next with that file.
The create index instructions don't go into how to create more than one at a time, and the bulk instructions only refer to creating/indexing documents, not creating the indices themselves. Any assistance on how to best follow the upgrade instructions would be appreciated.
I apparently don't have enough reputation to link the "create index" and "bulk" instructions, so apologies for that.
With a single curl command you could create an index template that will trigger the index creation at the time the documents hit your ES 5.x cluster.
Basically, this single curl command will create an index template that will kick in for each new index created on-the-fly. You can then use the "reindex from remote" technique in order to move your documents from ES 1.x to ES 5.x and don't worry about index creation since the index template will take care of it.
curl -XPUT 'localhost:9200/_template/my_template' -H 'Content-Type: application/json' -d'
{
"template": "*",
"settings": {
"index.refresh_interval" : -1,
"index.number_of_replicas" : 0
}
}
'
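With the template in place, a reindex-from-remote call for one index might look like this sketch (old-es-1x-host and my_old_index are placeholders; the remote host also has to be whitelisted via reindex.remote.whitelist in the 5.x cluster's elasticsearch.yml):
curl -XPOST 'localhost:9200/_reindex' -H 'Content-Type: application/json' -d'
{
  "source": {
    "remote": {
      "host": "http://old-es-1x-host:9200"
    },
    "index": "my_old_index"
  },
  "dest": {
    "index": "my_old_index"
  }
}
'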
I was able to accomplish this with a formatted list of indices (an index list cleaned up with sed), then feeding that file through the following script:
#!/bin/bash
# Read index names (one per line) from the file passed as $1
# and create each index with upgrade-friendly settings.
while read some_index; do
  curl -XPUT "localhost:9200/$some_index?pretty" -d'
  {
    "settings" : {
      "index" : {
        "refresh_interval" : -1,
        "number_of_replicas" : 0
      }
    }
  }'
  sleep 1
done < "$1"
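The index list itself (the file passed as $1) can be produced with the _cat API, for example (host and file name are placeholders):
curl -s 'localhost:9200/_cat/indices?h=index' > index_list.txt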
If anyone can point me in the direction of any pre-existing mechanisms in Elasticsearch, though, please do.
I am trying to delete all documents of one type. I tried to execute this:
curl -XDELETE 'http://localhost:9200/myindex/mytype/_query' -d '{"query": {"match_all": {}}}'
But this doesn't delete anything. The following query shows that my documents are still there.
curl -XGET 'http://localhost:9200/myindex/mytype/_search' | json -i
What am I doing wrong?
To delete the whole type, you call DELETE on it:
curl -XDELETE localhost:9200/myindex/mytype
This deletes the type mytype in the index myindex.
Be aware that this also deletes any mappings etc. associated with the type, but I would consider this the more resource-friendly way (though I can't prove it).
Never mind. I found the answer myself in this question: Delete records from Elasticsearch by query
The body to send with the request is just the query. So basically the right request is :
curl -XDELETE 'http://localhost:9200/myindex/mytype/_query' -d '{"match_all": {}}'
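For what it's worth, on newer Elasticsearch versions the _query endpoint is gone and the equivalent built-in API is _delete_by_query (available since 5.0), where the body does need the query wrapper (a sketch):
curl -XPOST 'http://localhost:9200/myindex/mytype/_delete_by_query' -H 'Content-Type: application/json' -d '{"query": {"match_all": {}}}'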