How to update a value in a document in Elasticsearch through Kibana - elasticsearch

POST /indexcn/doc/7XYIWHMB6jW2P6mpdcgv/_update
{
  "doc" : {
    "DELIVERYDATE" : 100
  }
}
I am trying to update DELIVERYDATE from 0 to 100, but I am getting a document missing exception.
How to update the document with a new value?
Here is my index:
"hits" : [
  {
    "_index" : "indexcn",
    "_type" : "_doc",
    "_id" : "7XYIWHMB6jW2P6mpdcgv",
    "_score" : 1.0,
    "_source" : {
      .......
      "DELIVERYDATE" : 0
    }
  }
]

You actually got the mapping type wrong (doc instead of _doc). Try this and it will work:
POST /indexcn/_doc/7XYIWHMB6jW2P6mpdcgv/_update
{
  "doc" : {
    "DELIVERYDATE" : 100
  }
}
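The request above uses the typed URL layout of Elasticsearch 6.x. On Elasticsearch 7 and later, where mapping types were removed, the equivalent update request drops the type from the path:
POST /indexcn/_update/7XYIWHMB6jW2P6mpdcgv
{
  "doc" : {
    "DELIVERYDATE" : 100
  }
}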

Related

Can I update specific field value in elasticsearch?

I want to update the count field in the following doc, for example. Please help.
{
  "_index" : "test-object",
  "_type" : "data",
  "_id" : "2.5.179963",
  "_score" : 10.039009,
  "_source" : {
    "object_id" : "2.5.179963",
    "block_time" : "2022-04-09T13:16:32",
    "block_number" : 46975476,
    "parent" : "1.2.162932",
    "field_type" : "1.3.2",
    "count" : 57000,
    "maintenance_flag" : false
  }
}
You can simply use the Update API:
POST <your-index>/_update/<your-doc-id>
{
  "doc": {
    "count": "" // provide the value which you want to update
  }
}
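Filled in with the index, id, and field from the question, the request might look like this (the new count value 58000 is just an illustration):
POST test-object/_update/2.5.179963
{
  "doc": {
    "count": 58000
  }
}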

QuickSight or Elasticsearch - Column-wise aggregation

Is this possible to do in QuickSight or Elasticsearch? I have tried calculated fields in QuickSight and runtime scripts in Elasticsearch, but I'm not sure how to do it. Also, is what I'm expecting even possible in these tools?
I'm trying a simple date difference between rows based on their action, e.g. "time taken to create a post after a user registered".
Data input and expected output (shown as images in the original post).
It is possible using a scripted metric aggregation.
Data
"hits" : [
{
"_index" : "index121",
"_type" : "_doc",
"_id" : "aqJ3HnoBF6_U07qsNY-s",
"_score" : 1.0,
"_source" : {
"user" : "Jen",
"activity" : "Logged In",
"activity_Time" : "2020-01-08"
}
},
{
"_index" : "index121",
"_type" : "_doc",
"_id" : "a6J3HnoBF6_U07qsXY_8",
"_score" : 1.0,
"_source" : {
"user" : "Jen",
"activity" : "Created a post",
"activity_Time" : "2020-05-08"
}
},
{
"_index" : "index121",
"_type" : "_doc",
"_id" : "bKJ3HnoBF6_U07qsk4-0",
"_score" : 1.0,
"_source" : {
"user" : "Mark",
"activity" : "Logged In",
"activity_Time" : "2020-01-03"
}
},
{
"_index" : "index121",
"_type" : "_doc",
"_id" : "baJ3HnoBF6_U07qsu48g",
"_score" : 1.0,
"_source" : {
"user" : "Mark",
"activity" : "Created a post",
"activity_Time" : "2020-01-08"
}
}
]
Query
{
  "size": 0,
  "aggs": {
    "user": {
      "terms": {
        "field": "user.keyword",
        "size": 10000
      },
      "aggs": {
        "distinct_sum_feedback": {
          "scripted_metric": {
            "init_script": "state.docs = []",
            "map_script": """
              Map span = [
                'timestamp': doc['activity_Time'],
                'activity': doc['activity.keyword'].value
              ];
              state.docs.add(span)
            """,
            "combine_script": "return state.docs;",
            "reduce_script": """
              def all_docs = [];
              for (s in states) {
                for (span in s) {
                  all_docs.add(span);
                }
              }
              all_docs.sort((HashMap o1, HashMap o2) -> o1['timestamp'].getValue().toInstant().toEpochMilli().compareTo(o2['timestamp'].getValue().toInstant().toEpochMilli()));
              Hashtable result = new Hashtable();
              boolean found = false;
              JodaCompatibleZonedDateTime loggedIn;
              for (s in all_docs) {
                if (s.activity == 'Logged In') {
                  loggedIn = s.timestamp.getValue();
                  found = true;
                }
                if (s.activity == 'Created a post' && found == true) {
                  found = false;
                  def dt = loggedIn.getYear() + '-' + loggedIn.getMonth() + '-' + loggedIn.getDayOfMonth();
                  def diff = s.timestamp.getValue().toInstant().toEpochMilli() - loggedIn.toInstant().toEpochMilli();
                  if (result.get(dt) == null) {
                    result.put(dt, diff / 1000 / 60 / 60 / 24);
                  }
                }
              }
              return result;
            """
          }
        }
      }
    }
  }
}
Result
"user" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "Jen",
"doc_count" : 2,
"distinct_sum_feedback" : {
"value" : {
"2020-JANUARY-8" : 121
}
}
},
{
"key" : "Mark",
"doc_count" : 2,
"distinct_sum_feedback" : {
"value" : {
"2020-JANUARY-3" : 5
}
}
}
]
}
Explanation
"init_script"
Executed prior to any collection of documents. Allows the aggregation to set up any initial state.
Here it declares an empty list (state.docs) to collect documents.
"map_script"
Executed once per document collected.
Builds a map holding the document's activity and timestamp and appends it to the list.
"combine_script"
Executed once on each shard after document collection is complete.
Returns the shard's collection of maps.
"reduce_script"
Executed once on the coordinating node after all shards have returned their results.
Merges the per-shard collections into one, sorts it by timestamp, then walks the sorted list: each "Logged In" entry records the login time, and the next "Created a post" entry stores the difference between the post time and that login time (in days), keyed by the login date.

Check documents not existing in Elasticsearch

I have millions of indexed documents. After indexing, I figured out that there is a document count mismatch. I want to send an array of hundreds of document ids and check in Elasticsearch whether those document ids exist, and in the response get the ids that have not been indexed.
Example:
These are the indexed documents:
[497499, 497550, 498370, 498476, 498639, 498726, 498826, 500479, 500780, 500918]
I'm sending 4 at a time:
[497499, 88888, 497550, 77777]
The response should be what's not there:
[88888, 77777]
You should consider using the _mget endpoint and then parsing the result, for instance:
GET someidx/_mget?_source=false
{
  "docs" : [
    {
      "_id" : "c37m5W4BifZmUly9Ni-X"
    },
    {
      "_id" : "2"
    }
  ]
}
Result:
{
  "docs" : [
    {
      "_index" : "someidx",
      "_type" : "_doc",
      "_id" : "c37m5W4BifZmUly9Ni-X",
      "_version" : 1,
      "_seq_no" : 0,
      "_primary_term" : 1,
      "found" : true
    },
    {
      "_index" : "someidx",
      "_type" : "_doc",
      "_id" : "2",
      "found" : false
    }
  ]
}
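Since you only care about existence, you can additionally use filter_path to trim the response down to each id and its found flag, which keeps the payload small when checking hundreds of ids at a time (same hypothetical index and ids as above):
GET someidx/_mget?_source=false&filter_path=docs._id,docs.found
{
  "docs" : [
    { "_id" : "c37m5W4BifZmUly9Ni-X" },
    { "_id" : "2" }
  ]
}
The ids to report as missing are then simply the docs entries that come back with "found" : false.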

ElasticSearch Bulk with ingest plugin

I am using the Attachment Processor in a pipeline.
All works fine, but I wanted to do multiple POSTs, so I tried to use the bulk API.
Bulk works fine too, but I can't find how to send the URL parameter pipeline=attachment.
This POST works:
POST testindex/type1/1?pipeline=attachment
{
  "data": "Y291Y291",
  "name" : "Marc",
  "age" : 23
}
This bulk works:
POST _bulk
{ "index" : { "_index" : "testindex", "_type" : "type1", "_id" : "2" } }
{ "name" : "jean", "age" : 22 }
But how can I index Marc with his data field in bulk to be understood by the pipeline plugin?
Thanks to Val's comment, I did this and it works fine:
POST _bulk
{ "index" : { "_index" : "testindex", "_type" : "type1", "_id" : "2", "pipeline": "attachment"} } }
{"data": "Y291Y291", "name" : "jean", "age" : 22}

Trying Elasticsearch with Shield for a Kibana dashboard, getting an error

I have used the following sample data in my environment.
Data:
{ "index" : { "_index" : "cases", "_type" : "case", "_id" : "101" } }
{ "admission" : "2015-01-03", "discharge" : "2015-01-04", "injury" : "broken arm" }
{ "index" : { "_index" : "cases", "_type" : "case", "_id" : "102" } }
{ "admission" : "2015-01-03", "discharge" : "2015-01-06", "injury" : "broken leg" }
{ "index" : { "_index" : "cases", "_type" : "case", "_id" : "103" } }
{ "admission" : "2015-01-06", "discharge" : "2015-01-07", "injury" : "broken nose" }
{ "index" : { "_index" : "cases", "_type" : "case", "_id" : "104" } }
{ "admission" : "2015-01-07", "discharge" : "2015-01-07", "injury" : "bruised arm" }
{ "index" : { "_index" : "cases", "_type" : "case", "_id" : "105" } }
{ "admission" : "2015-01-08", "discharge" : "2015-01-10", "injury" : "broken arm" }
{ "index" : { "_index" : "patients", "_type" : "patient", "_id" : "101" } }
{ "name" : "Adam", "age" : 28 }
{ "index" : { "_index" : "patients", "_type" : "patient", "_id" : "102" } }
{ "name" : "Bob", "age" : 45 }
{ "index" : { "_index" : "patients", "_type" : "patient", "_id" : "103" } }
{ "name" : "Carol", "age" : 34 }
{ "index" : { "_index" : "patients", "_type" : "patient", "_id" : "104" } }
{ "name" : "David", "age" : 14 }
{ "index" : { "_index" : "patients", "_type" : "patient", "_id" : "105" } }
{ "name" : "Eddie", "age" : 72 }
Indexed the data into the node:
$ curl -X POST 'http://localhost:9200/_bulk' --data-binary @./hospital.json
[2015-02-12 08:18:01,347][INFO ][shield.license ] [node0] enabling license for [shield]
[2015-02-12 08:18:01,347][INFO ][license.plugin.core ] [node0] license for [shield] - valid
[2015-02-12 08:18:01,355][ERROR][shield.license ] [node0]
#
# Shield license will expire on [Saturday, March 14, 2015]. Cluster health, cluster stats and indices stats operations are
# blocked on Shield license expiration. All data operations (read and write) continue to work. If you
# have a new license, please update it. Otherwise, please reach out to your support contact.
#
Installed Shield and started it as above.
The data is now protected, and I see the following if I try to access it:
$ curl localhost:9200/cases/case/101?pretty=true
{
  "error" : "AuthenticationException[missing authentication token for REST request [/cases/case/1]]",
  "status" : 401
}
Users and roles are added like below:
$ elasticsearch/bin/shield/esusers useradd alice -r nurse
$ elasticsearch/bin/shield/esusers useradd bob -r doctor
I have edited roles.yml and tried to add doctor and nurse according to the example mentioned above, but security does not work for me:
ubuntu@ip-10-142-247-183:~/elkproject/elasticsearch-1.4.4/config/shield$ curl --user alice:abc123 localhost:9200/_count?pretty=true
{
  "error" : "AuthenticationException[unable to authenticate user [alice] for REST request [/_count?pretty=true]]",
  "status" : 401
}
Note: I referred to this blog: http://blog.trifork.com/2015/03/05/shield-your-kibana-dashboards/
Any help would be highly appreciated.
Did you install elasticsearch from a package (like a RPM or DEB)? If so, there may be an issue with the esusers tool putting the users in the wrong place. Right now, you have to configure your environment with the right location and add the users. If this is the case, you can move the $ES_HOME/config/shield directory to /etc/elasticsearch, which is the default configuration directory for RPM and DEB installations. When using the esusers commands in the future, just make sure the environment variables are set like shown in the link.
You can also remove Shield and start the install over following the full getting started guide and then start modifying the files as mentioned in the blog. To remove the existing Shield install: bin/plugin -r shield
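For the package-install fix described above, a minimal sketch of the move (the source path assumes the tarball layout shown in your prompt; adjust it to your actual $ES_HOME):
$ sudo mv ~/elkproject/elasticsearch-1.4.4/config/shield /etc/elasticsearch/
After moving the directory, re-run the esusers commands with the environment pointing at /etc/elasticsearch, as described in the linked documentation, so the users land where the package installation reads them.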
