Return only count value on _count query - elasticsearch

Executing the following
http://172.21.21.151:9200/printer-stats-*/_doc/_count
I get the following response
{
"count": 19299,
"_shards": {
"total": 44,
"successful": 44,
"skipped": 0,
"failed": 0
}
}
How can I modify the query to only return
{
"count": 19299
}
On _search queries we can use filter_path to only get the desired output, but this does not seem to work on _count.
I also tried adding a body like the following:
{
"_shards": false
}
But it throws the following error
{
"error": {
"root_cause": [
{
"type": "parsing_exception",
"reason": "request does not support [_shards]",
"line": 2,
"col": 3
}
],
"type": "parsing_exception",
"reason": "request does not support [_shards]",
"line": 2,
"col": 3
},
"status": 400
}
My version is 7.9.2
Probably this has been asked before, but I have not found a relevant question.

filter_path definitely works on the _count endpoint:
http://172.21.21.151:9200/printer-stats-*/_doc/_count?filter_path=count
Response =>
{
"count": 19299
}
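If you are on a client where query-string options are awkward to pass, the same trimming is easy to do client-side. A minimal Python sketch (the helper name is my own; the response dict is copied from the question):

```python
# Trim an Elasticsearch _count response down to the keys we care about,
# mimicking what ?filter_path=count does on the server side.
def filter_keys(response: dict, *paths: str) -> dict:
    """Keep only the listed top-level keys of the response."""
    return {k: v for k, v in response.items() if k in paths}

# Response body as returned by GET printer-stats-*/_count (from the question).
raw = {
    "count": 19299,
    "_shards": {"total": 44, "successful": 44, "skipped": 0, "failed": 0},
}

print(filter_keys(raw, "count"))  # {'count': 19299}
```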

Related

no shard available action exception on kibana discover

When I wanted to view the logs in Kibana, I received this error:
1 of 37 shards failed The data you are seeing might be incomplete or wrong.
This is the response:
{
"took": 10,
"timed_out": false,
"_shards": {
"total": 21,
"successful": 20,
"skipped": 20,
"failed": 1,
"failures": [
{
"shard": 0,
"index": "tourism-2022.12.11",
"node": null,
"reason": {
"type": "no_shard_available_action_exception",
"reason": null,
"index_uuid": "j2J6dUvTQ_q7qeyyU56bag",
"shard": "0",
"index": "tourism-2022.12.11"
}
}
]
},
"hits": {
"total": 0,
"max_score": 0,
"hits": []
}
}
I deleted some indexes and expanded the PVC, but nothing worked.
Check the cluster status via the Kibana console; if you haven't got Kibana available, convert the commands to curl requests.
Check the Elasticsearch cluster status:
GET /_cluster/health?wait_for_status=yellow&timeout=50s
Check index status:
GET /_cat/indices/tourism-2022.12.11?v=true&s=index
If all shards are green, have you got documents available in your index?
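To detect this situation programmatically rather than in the Kibana UI, you can inspect the _shards section that every search response carries. A small Python sketch (the helper name is mine; the response dict is abbreviated from the question):

```python
# Detect partial results by inspecting the _shards section of a search response.
def shard_failures(response: dict) -> list:
    """Return the shard failure entries, or an empty list when all shards succeeded."""
    shards = response.get("_shards", {})
    if shards.get("failed", 0) == 0:
        return []
    return shards.get("failures", [])

# _shards section from the failing response in the question (abbreviated).
resp = {
    "_shards": {
        "total": 21,
        "successful": 20,
        "failed": 1,
        "failures": [
            {"index": "tourism-2022.12.11",
             "reason": {"type": "no_shard_available_action_exception"}}
        ],
    }
}

for failure in shard_failures(resp):
    print(failure["index"], failure["reason"]["type"])
```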

Elasticsearch 7.11.1 not recognizing ?realtime parameter

I'm trying to debug a major performance bottleneck after upgrading Elasticsearch to 7.11.1 - I'm experiencing slow PUT inserts/updates (which I do a lot of) and assume it relates to changes in the way indexes are managed.
I found the new parameter realtime and thought I'd give it a shot, but I get unrecognized parameter: [realtime] when trying it.
GET http://localhost:9200
{
"name": "myhost",
"cluster_name": "mycluster",
"cluster_uuid": "uc03F4mpq1mO8CzQSzfB1g",
"version": {
"number": "7.11.1",
"build_flavor": "default",
"build_type": "rpm",
"build_hash": "ff17057114c2199c9c1bbecc727003a907c0db7a",
"build_date": "2021-02-15T13:44:09.394032Z",
"build_snapshot": false,
"lucene_version": "8.7.0",
"minimum_wire_compatibility_version": "6.8.0",
"minimum_index_compatibility_version": "6.0.0-beta1"
},
"tagline": "You Know, for Search"
}
GET http://localhost:9200/foo/bar/_count?q=foo:bar
{
"count": 382,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
}
}
GET http://localhost:9200/foo/bar/_count?q=foo:bar&realtime=false
{
"error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "request [/foo/bar/_count] contains unrecognized parameter: [realtime]"
}
],
"type": "illegal_argument_exception",
"reason": "request [/foo/bar/_count] contains unrecognized parameter: [realtime]"
},
"status": 400
}
I've double checked the manual and my version. I have 7.11.1, the manual page is 7.11:
https://www.elastic.co/guide/en/elasticsearch/reference/7.11/docs-get.html#realtime
Any help appreciated.
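Note that the linked manual section documents realtime on the single-document Get API (GET /&lt;index&gt;/_doc/&lt;id&gt;), not on _count, which would explain the unrecognized-parameter error. A sketch of a URL where the flag is accepted (host, index, and id are placeholders; built here with Python's urllib just to show the shape):

```python
from urllib.parse import urlencode

# realtime is a flag of the single-document Get API (docs-get), not _count.
# Build the kind of URL where Elasticsearch accepts it.
base = "http://localhost:9200"
index, doc_id = "foo", "1"          # placeholders, not real index/id
params = urlencode({"realtime": "false"})
url = f"{base}/{index}/_doc/{doc_id}?{params}"
print(url)  # http://localhost:9200/foo/_doc/1?realtime=false
```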

Error while remote indexing with elasticsearch

I'm trying to move from one ES cluster to another, in order to plan an upgrade.
Both are the same version (6.4). To achieve this, I'm using this command:
curl -XPOST -H "Content-Type: application/json" http://new_cluster/_reindex -d @reindex.json
And reindex.json looks like this:
{
  "source": {
    "remote": {
      "host": "http://old_cluster:9199"
    },
    "index": "megabase.33.2",
    "query": {
      "match_all": {}
    }
  },
  "dest": {
    "index": "megabase.33.2"
  }
}
I whitelisted the old cluster on the new cluster, and it works, but I can't complete the data migration because of this error, which I don't understand:
{
"took":1762,
"timed_out":false,
"total":8263428,
"updated":5998,
"created":5001,
"deleted":0,
"batches":11,
"version_conflicts":0,
"noops":0,
"retries":{
"bulk":0,
"search":0
},
"throttled_millis":0,
"requests_per_second":-1.0,
"throttled_until_millis":0,
"failures":[
{
"index":"megabase.33.2",
"type":"persona",
"id":"noYOA3IBTWbNbLJUqk6T",
"cause":{
"type":"mapper_parsing_exception",
"reason":"failed to parse [adr_inse]",
"caused_by":{
"type":"illegal_argument_exception",
"reason":"For input string: \"2A004\""
}
},
"status":400
}
]
}
The record in the original cluster looks like this:
{
"took": 993,
"timed_out": false,
"_shards": {
"total": 4,
"successful": 4,
"skipped": 0,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 0,
"hits": [
{
"_index": "megabase.33.2",
"_type": "persona",
"_id": "noYOA3IBTWbNbLJUqk6T",
"_score": 0,
"_source": {
"address": "Obfucated",
"adr_inse": "2A004",
"age": 10,
"base": "Obfucated",
"city": "Obfucated",
"cp": 20167,
"email_md5": "Obfucated",
"fraicheur": "2020-01-12T19:39:04+01:00",
"group": 1,
"latlon": "Obfucated",
"partner": "Obfucated",
"partnerbase": 2,
"sex": 2,
"sms_md5": "Obfucated"
}
}
]
}
}
Any clue on what I'm doing wrong?
Thanks a lot
I found out that the mapping is not created correctly when using only the reindex method.
So I dropped the new index and recreated the mapping using elasticdump:
elasticdump --input=http://oldcluster/megabase.33.2 --output=http://newcluster/megabase.33.2 --type=mapping
Then I ran the previous script and everything worked flawlessly (and was rather quick).
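For anyone hitting the same error: "For input string: \"2A004\"" is a numeric parse failure, which suggests the destination index dynamically mapped adr_inse as a number from the first purely numeric values it saw, so Corsican INSEE codes like "2A004" then fail to index. An alternative to elasticdump is to create the destination index with an explicit mapping before reindexing. A sketch of the create-index body (ES 6.x style, where the type name "persona" is still part of the mapping; abbreviated to the problematic field):

```python
import json

# Explicitly map adr_inse as keyword before reindexing, so values like
# "2A004" are stored as strings instead of being parsed as numbers.
# Send the resulting JSON with something like:
#   curl -XPUT -H "Content-Type: application/json" \
#        http://new_cluster/megabase.33.2 -d @create_index.json
create_index_body = {
    "mappings": {
        "persona": {
            "properties": {
                "adr_inse": {"type": "keyword"}
            }
        }
    }
}

print(json.dumps(create_index_body, indent=2))
```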

No results from search when passing more than one parameter in user metadata

I want to apply document-level security in Elasticsearch, but once I provide more than one value in the user metadata I get no matches.
I am creating a role and a user in Elasticsearch and passing values inside the user metadata to the role, on whose basis the search should happen. It works fine if I give one value.
For creating role:
PUT _xpack/security/role/my_policy
{
  "indices": [{
    "names": ["my_index"],
    "privileges": ["read"],
    "query": {
      "template": {
        "source": "{\"bool\": {\"filter\": [{\"terms_set\": {\"country_name\": {\"terms\": {{#toJson}}_user.metadata.country_name{{/toJson}},\"minimum_should_match_script\":{\"source\":\"params.num_terms\"}}}}]}}"
      }
    }
  }]
}
And for user:
PUT _xpack/security/user/jack_black
{
  "username": "jack_black",
  "password": "testtest",
  "roles": ["my_policy"],
  "full_name": "Jack Black",
  "email": "jb@tenaciousd.com",
  "metadata": {
    "country_name": ["india", "japan"]
  }
}
I expect the output to be results for india and japan only. If the user searches for anything else they should get no results.
However, I do not see any results at all:
{
"took": 1,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"skipped": 0,
"failed": 0
},
"hits": {
"total": 0,
"max_score": null,
"hits": []
}
}
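A likely cause: terms_set with "minimum_should_match_script": "params.num_terms" requires a document's country_name field to contain all of the user's metadata terms, so a document tagged with a single country stops matching as soon as the metadata holds two values. A small Python sketch of the matching logic (the helper is illustrative, not an Elasticsearch API):

```python
# Simulate terms_set semantics: a document matches only if its field
# contains at least minimum_should_match of the supplied terms. With
# "params.num_terms", that minimum equals the number of supplied terms.
def terms_set_matches(doc_terms, user_terms, minimum_should_match=None):
    if minimum_should_match is None:
        minimum_should_match = len(user_terms)  # what params.num_terms evaluates to
    return len(set(doc_terms) & set(user_terms)) >= minimum_should_match

user_metadata = ["india", "japan"]

# A document tagged with a single country no longer matches...
print(terms_set_matches(["india"], user_metadata))     # False
# ...whereas "match ANY of the user's countries" behaves like this:
print(terms_set_matches(["india"], user_metadata, 1))  # True
```

If the intent is "match any of the user's countries", a plain terms filter in the role query template behaves like the second call.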

Possible to get the size_in_bytes for records matching a specific query?

The documentation on the stats api indicates that we can do the following:
http://es.cluster.ip.addr:9200/indexname/_stats
Which results in an output like:
{
"_shards": {
"total": 1,
"successful": 1,
"failed": 0
},
"_all": {
"primaries": {
"docs": {
"count": 32930,
"deleted": 0
},
"store": {
"size_in_bytes": 3197332,
"throttle_time_in_millis": 0
},
// ... etc
}
}
}
My question is: is there a way to obtain the storage size for a specific set of records, such as when we run a search query:
http://es.cluster.ip.addr:9200/indexname/type/_search?q=identifier:123
So essentially, the size_in_bytes for all records matching the identifier 123?
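As far as I know there is no API that reports on-disk size per query; a common workaround is to estimate it from the index-level averages in _stats. A rough Python sketch (the totals are copied from the _stats output above; matching_docs is a placeholder for your query's hit count):

```python
# Approximate the on-disk size of the documents matching a query:
# average stored size per doc (from _stats) times the query's hit count.
total_size_in_bytes = 3197332   # store.size_in_bytes from _stats
total_docs = 32930              # docs.count from _stats
matching_docs = 1500            # placeholder: e.g. the "count" from _count?q=identifier:123

avg_doc_size = total_size_in_bytes / total_docs
estimated_size = avg_doc_size * matching_docs
print(round(estimated_size))
```

This is only an estimate: it assumes documents are roughly uniform in size, and the store size also includes index structures, not just the `_source`.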
