I want to update the count field in the following document, for example. Please help.
{
  "_index" : "test-object",
  "_type" : "data",
  "_id" : "2.5.179963",
  "_score" : 10.039009,
  "_source" : {
    "object_id" : "2.5.179963",
    "block_time" : "2022-04-09T13:16:32",
    "block_number" : 46975476,
    "parent" : "1.2.162932",
    "field_type" : "1.3.2",
    "count" : 57000,
    "maintenance_flag" : false
  }
}
You can simply use the Update API:
POST <your-index>/_update/<your-doc-id>
{
  "doc": {
    "count": 58000 // provide the new value you want to set
  }
}
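Applied to the document above, the full request would look like this (a sketch, assuming Elasticsearch 7+ where the typeless _update endpoint is available; 58000 is just an illustrative new value):

POST test-object/_update/2.5.179963
{
  "doc": {
    "count": 58000
  }
}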
Related
POST /indexcn/doc/7XYIWHMB6jW2P6mpdcgv/_update
{
"doc" : {
"DELIVERYDATE" : 100
}
}
I am trying to update the DELIVERYDATE from 0 to 100, but I am getting a document missing exception.
How to update the document with a new value?
Here is my index :
"hits" : [
  {
    "_index" : "indexcn",
    "_type" : "_doc",
    "_id" : "7XYIWHMB6jW2P6mpdcgv",
    "_score" : 1.0,
    "_source" : {
      .......
      .......
      "DELIVERYDATE" : 0,
    }
You actually got the mapping type wrong (doc instead of _doc). Try this and it will work:
POST /indexcn/_doc/7XYIWHMB6jW2P6mpdcgv/_update
{
"doc" : {
"DELIVERYDATE" : 100
}
}
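On Elasticsearch 7 and later, where mapping types are removed, the same update would use the _update endpoint directly (a sketch, assuming ES 7+; same index, document ID, and field as above):

POST /indexcn/_update/7XYIWHMB6jW2P6mpdcgv
{
  "doc" : {
    "DELIVERYDATE" : 100
  }
}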
In Kibana I have many dozens of indices.
Given one of them, I want a way to find all the saved objects (searches/dashboards/visualizations) that rely on this index.
Thanks
You can retrieve the document ID of your index pattern and then use that ID to search your .kibana index:
{
"_index" : ".kibana",
"_type" : "index-pattern",
"_id" : "AWBWDmk2MjUJqflLln_o", <---- take this id...
You can use this query on Kibana 5:
GET .kibana/_search?q=AWBWDmk2MjUJqflLln_o <---- ...and use it here
You'll find your visualizations:
{
  "_index" : ".kibana",
  "_type" : "visualization",
  "_id" : "AWBZNJNcMjUJqflLln_s",
  "_score" : 6.2450323,
  "_source" : {
    "title" : "CA groupe",
    "visState" : """{"title":"XXX","type":"pie","params":{"addTooltip":true,"addLegend":true,"legendPosition":"right","isDonut":false,"type":"pie"},"aggs":[{"id":"1","enabled":true,"type":"sum","schema":"metric","params":{"field":"XXX","customLabel":"XXX"}},{"id":"2","enabled":true,"type":"terms","schema":"segment","params":{"field":"XXX","size":5,"order":"desc","orderBy":"1","customLabel":"XXX"}}],"listeners":{}}""",
    "uiStateJSON" : "{}",
    "description" : "",
    "version" : 1,
    "kibanaSavedObjectMeta" : {
      "searchSourceJSON" : """{"index":"AWBWDmk2MjUJqflLln_o","query":{"match_all":{}},"filter":[]}"""
                                        ^
                                        |
                                        this is where your index pattern is used
    }
  }
},
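The same lookup can be done from the command line (a sketch; assumes Elasticsearch is reachable on localhost:9200 and uses the index-pattern ID from above):

curl -s 'localhost:9200/.kibana/_search?q=AWBWDmk2MjUJqflLln_o&pretty'

Each hit's _type (visualization, dashboard, search) tells you which kind of saved object references the index pattern.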
I am using the Attachment Processor in a pipeline.
Everything works fine, but I wanted to index multiple documents at once, so I tried to use the bulk API.
Bulk works fine too, but I can't find how to send the URL parameter "pipeline=attachment".
This request works:
POST testindex/type1/1?pipeline=attachment
{
"data": "Y291Y291",
"name" : "Marc",
"age" : 23
}
This bulk works:
POST _bulk
{ "index" : { "_index" : "testindex", "_type" : "type1", "_id" : "2" } }
{ "name" : "jean", "age" : 22 }
But how can I index Marc with his data field in bulk to be understood by the pipeline plugin?
Thanks to Val's comment, I did the following and it works fine:
POST _bulk
{ "index" : { "_index" : "testindex", "_type" : "type1", "_id" : "2", "pipeline": "attachment" } }
{"data": "Y291Y291", "name" : "jean", "age" : 22}
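If every document in the bulk request should go through the same pipeline, you can also set it once on the URL instead of per action (a sketch; same index and pipeline names as above, with illustrative document values):

POST _bulk?pipeline=attachment
{ "index" : { "_index" : "testindex", "_type" : "type1", "_id" : "3" } }
{ "data": "Y291Y291", "name" : "paul", "age" : 30 }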
I am new to Elasticsearch.
I am trying to upload my existing MySQL data to Elasticsearch. Elasticsearch bulk import uses JSON as the data format, so I converted my data to JSON.
employee.json:
[{"EmpId":"101", "Name":"John Doe", "Dept":"IT"}
{"EmpId":"102", "Name":"FooBar", "Dept":"HR"}]
But I am not able to upload my data using the following curl command:
curl -XPOST 'localhost:9200/_bulk?pretty' --data-binary @employee.json
I get a parsing exception message.
After reading the documentation (https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html), I realized that the data format should be something like this:
action_and_meta_data\n
optional_source\n
action_and_meta_data\n
optional_source\n
....
action_and_meta_data\n
optional_source\n
I am still not sure how to put my data into this format and perform the upload.
Basically, I want to know the exact data format expected by the Elasticsearch bulk upload, and whether my curl command is correct.
Your data should be in this form:
// if you want to use EmpId as the document id, specify _id; otherwise omit the _id part
{ "index" : { "_index" : "index_name", "_type" : "type_name", "_id" : "101" } }
{"EmpId":"101", "Name":"John Doe", "Dept":"IT"}
{ "index" : { "_index" : "index_name", "_type" : "type_name", "_id" : "102" } }
{"EmpId":"102", "Name":"FooBar", "Dept":"HR"}
....
Or you can use logstash: https://www.elastic.co/blog/logstash-jdbc-input-plugin
From the docs:
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "1" } }
{ "field1" : "value1" }
{ "delete" : { "_index" : "test", "_type" : "type1", "_id" : "2" } }
{ "create" : { "_index" : "test", "_type" : "type1", "_id" : "3" } }
{ "field1" : "value3" }
{ "update" : {"_id" : "1", "_type" : "type1", "_index" : "index1"} }
{ "doc" : {"field2" : "value2"} }
So you would probably want your file to read something like this (note that an "update" action requires the source to be wrapped in a "doc" object):
{ "update" : {"_id" : "101", "_type" : "foo", "_index" : "bar"} }
{ "doc" : {"EmpId":"101", "Name":"John Doe", "Dept":"IT"} }
https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html
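Once the file is in the newline-delimited form above, the upload itself would look like this (a sketch; the Content-Type header is required on Elasticsearch 6 and later, and the file must end with a trailing newline):

curl -XPOST 'localhost:9200/_bulk?pretty' -H 'Content-Type: application/x-ndjson' --data-binary @employee.json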
Here is my problem: I'm trying to insert a bunch of data into Elasticsearch and to visualize it using Kibana; however, I have an issue with Kibana's timestamp recognition.
My time field is called "dateStart", and I tried to use it as a timestamp with the following command:
curl -XPUT 'localhost:9200/test/type1/_mapping' -d'{ "type1" :{"_timestamp":{"enabled":true, "format":"yyyy-MM-dd HH:mm:ss","path":"dateStart"}}}'
But this command gives me the following error message:
{"error":"MergeMappingException[Merge failed with failures {[Cannot update path in _timestamp value. Value is null path in merged mapping is missing]}]","status":400}
I'm not sure I understand what I'm doing with this command, but what I would like to do is tell Elasticsearch and Kibana to use my "dateStart" field as the timestamp.
Here is a sample of my insert file (I use bulk insert) :
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "1"} }
{ "dateStart" : "15-03-31 06:00:00", "score":0.9920092243874442}
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "2"} }
{ "dateStart" : "15-03-23 06:00:00", "score":0.0}
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "3"} }
{ "dateStart" : "15-03-29 12:00:00", "score":0.0}