'Unknown BaseAggregationBuilder [composite] error' when running elasticsearch composite aggregation - elasticsearch

I'm trying to create a composite aggregation per the documentation here:
https://www.elastic.co/guide/en/elasticsearch/reference/6.8/search-aggregations-bucket-composite-aggregation.html
I'm basically following this example:
curl -X GET "localhost:9200/_search?pretty" -H 'Content-Type: application/json' -d'
{
"aggs" : {
"my_buckets": {
"composite" : {
"sources" : [
{ "product": { "terms" : { "field": "product" } } }
]
}
}
}
}
'
but every time I try to run the code I get the below error regardless of which field I try to aggregate on:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "unknown_named_object_exception",
        "reason" : "Unknown BaseAggregationBuilder [composite]",
        "line" : 5,
        "col" : 27
      }
    ],
    "type" : "unknown_named_object_exception",
    "reason" : "Unknown BaseAggregationBuilder [composite]",
    "line" : 5,
    "col" : 27
  },
  "status" : 400
}
I did some digging around and haven't seen the error 'Unknown BaseAggregationBuilder [composite]' come up anywhere else, so I thought I'd post this question here to see if anyone has run into a similar issue. Cardinality and regular terms aggregations work fine. Also, to clarify, I'm running on v6.8.

Composite aggregations were released in 6.1.0. The error suggests you cannot actually be running >= 6.1 but some older version.
What is your version.number when you run curl -X GET "localhost:9200"?
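For reference, the root endpoint reports the version; its response looks roughly like this (the values below are placeholders, not output from the cluster in the question):
curl -X GET "localhost:9200"
{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "5.6.16",
    ...
  },
  "tagline" : "You Know, for Search"
}
If version.number there is below 6.1.0, the composite aggregation is simply not available on that node.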

Related

Date range search in Elassandra

I have created an index like below.
curl -XPUT -H 'Content-Type: application/json' 'http://x.x.x.x:9200/date_index' -d '{
  "settings" : { "keyspace" : "keyspace1" },
  "mappings" : {
    "table1" : {
      "discover" : "sent_date",
      "properties" : {
        "sent_date" : { "type": "date", "format": "yyyy-MM-dd HH:mm:ssZZ" }
      }
    }
  }
}'
I need to search for results in a date range, for example "from" : "2039-05-07 11:22:34+0000" and "to" : "2039-05-07 11:22:34+0000", both inclusive.
I am trying this:
curl -XGET -H 'Content-Type: application/json' 'http://x.x.x.x:9200/date_index/_search?pretty=true' -d '
{
  "query" : {
    "aggregations" : {
      "date_range" : {
        "sent_date" : {
          "from" : "2039-05-07 11:22:34+0000",
          "to" : "2039-05-07 11:22:34+0000"
        }
      }
    }
  }
}'
I am getting the error below.
"error" : {
"root_cause" : [
{
"type" : "parsing_exception",
"reason" : "no [query] registered for [aggregations]",
"line" : 4,
"col" : 22
}
],
"type" : "parsing_exception",
"reason" : "no [query] registered for [aggregations]",
"line" : 4,
"col" : 22
},
"status" : 400
Please advise.
The query seems to be malformed. Please see the date range aggregation documentation at https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-daterange-aggregation.html and note the differences:
you're introducing a query without defining any - do you need one?
you should use aggs instead of aggregations
you should name your aggregation
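Putting those points together, a corrected request might look roughly like the sketch below (the field name, format and dates are taken from the question; the aggregation name sent_date_ranges is just an example, and note that in a date_range aggregation the from bound is included while the to bound is excluded):
curl -XGET -H 'Content-Type: application/json' 'http://x.x.x.x:9200/date_index/_search?pretty=true' -d '
{
  "size" : 0,
  "aggs" : {
    "sent_date_ranges" : {
      "date_range" : {
        "field" : "sent_date",
        "format" : "yyyy-MM-dd HH:mm:ssZZ",
        "ranges" : [
          { "from" : "2039-05-07 11:22:34+0000", "to" : "2039-05-07 11:22:34+0000" }
        ]
      }
    }
  }
}'
If you only need to filter documents by date rather than bucket them, a range query may be closer to what you want than an aggregation.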

action [clustering/cluster] is unauthorized for user [elastic]

My Elasticsearch cluster has three nodes, I am using the elasticsearch-carrot2 plugin, and elastic is a superuser in Elasticsearch.
The request I sent is below:
curl -XPOST --user elastic:**** -H "Content-Type: application/json" 'http://ip:port/index/type/_search_with_clusters?pretty=true' -d '
{
  "search_request": {
    "_source" : [
      "title",
      "body"
    ],
    "query" : {
      "match" : {
        "title" : "something"
      }
    },
    "size": 100
  },
  "query_hint": "something",
  "field_mapping": {
    "title" : ["_source.title", "_source.body"]
  }
}'
Unfortunately I get the following error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "action [clustering/cluster] is unauthorized for user [elastic]"
      }
    ],
    "type" : "security_exception",
    "reason" : "action [clustering/cluster] is unauthorized for user [elastic]"
  },
  "status" : 403
}
The problem comes from the fact that the plugin doesn't work with XPack security.
More info can be seen in this issue: https://github.com/carrot2/elasticsearch-carrot2/issues/69

Getting unknown setting [index._id] error while adding data to Elasticsearch

I have created a mapping eventlog in Elasticsearch 5.1.1. I added it successfully; however, while adding data under it, I am getting an illegal_argument_exception with reason unknown setting [index._id]. My result from getting the indices is yellow open eventlog sX9BYIcOQLSKoJQcbn1uxg 5 1 0 0 795b 795b.
My mapping is:
{
  "mappings" : {
    "_default_" : {
      "properties" : {
        "datetime" : { "type": "date" },
        "ip" : { "type": "ip" },
        "country" : { "type" : "keyword" },
        "state" : { "type" : "keyword" },
        "city" : { "type" : "keyword" }
      }
    }
  }
}
and I am adding the data using
curl -u elastic:changeme -XPUT 'http://localhost:8200/eventlog' -d '{"index":{"_id":1}}
{"datetime":"2016-03-31T12:10:11Z","ip":"100.40.135.29","country":"US","state":"NY","city":"Highland"}';
If I don't include the {"index":{"_id":1}} line, I get Illegal_argument_exception with reason unknown setting [index.apiKey].
The problem was caused by sending the data on the command line as a string. Keeping the data in a JSON file and sending it as binary solved it. The correct command is:
curl -u elastic:changeme -XPUT 'http://localhost:8200/eventlog/_bulk?pretty' --data-binary @eventlogs.json
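For reference, eventlogs.json has to use the bulk API's newline-delimited format, with an action line before each document and a final newline at the end of the file; using the document from the question, it would look like this:
{"index":{"_id":1}}
{"datetime":"2016-03-31T12:10:11Z","ip":"100.40.135.29","country":"US","state":"NY","city":"Highland"}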

Elasticsearch does not find an existing document using the DSL

I don't know why, but searching for a document using URI Search returns the right document, while the same document is not found if I use the DSL API.
To reproduce the issue:
Without any index created, I insert this document:
curl http://localhost:9299/integrationtest-index/searchable/ID_XXXX2 -d '{ "ref" : "XXXX2", "field1" : "value1" }'
So the index is created automatically with the default mapping (type searchable):
curl http://localhost:9299/integrationtest-index?pretty
{
  "integrationtest-index" : {
    "aliases" : { },
    "mappings" : {
      "searchable" : {
        "properties" : {
          "field1" : {
            "type" : "string"
          },
          "ref" : {
            "type" : "string"
          }
        }
      }
    },
    "settings" : {
      "index" : {
        "field1" : "value1",
        "ref" : "XXXX2",
        "number_of_shards" : "5",
        "creation_date" : "1466780216631",
        "number_of_replicas" : "1",
        "uuid" : "GBj2VF-wQy6JP74AqoIn5g",
        "version" : {
          "created" : "2020099"
        }
      }
    },
    "warmers" : { }
  }
}
This query returns one document:
curl http://localhost:9299/integrationtest-index/searchable/_search?q=ref:XXXX2
But this other query responds that it does not exist:
curl -XPOST http://localhost:9299/integrationtest-index/searchable/_search/exists -d '
{
  "query": {
    "term" : {
      "ref" : "XXXX2"
    }
  }
}'
Why does the last query say that the document does not exist?
Environment:
ElasticSearch 2.2.0
Ubuntu 16.04 LTS
OpenJDK Runtime Environment (build 1.8.0_91-8u91-b14-0ubuntu4~16.04.1-b14)
I run into the same problem every few months, so I decided to answer it myself and share my own silly mistakes.
By default, Elasticsearch indexes string fields as analyzed, so the term query does not find the document.
If you use URI Search, Elasticsearch executes a query_string query and not a term query.
This query is working:
curl -XPOST http://localhost:9299/integrationtest-index/searchable/_search/exists -d '
{
  "query": {
    "match" : {
      "ref" : "XXXX2"
    }
  }
}'
More information is in the documentation, in the section "Why doesn’t the term query match my document?".
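Alternatively, if you really need exact-match term queries on ref, one option in ES 2.x is to create the index yourself before indexing any documents, with the field mapped as not_analyzed instead of relying on the automatic mapping. A minimal sketch, assuming the same index and type names as the question:
curl -XPUT http://localhost:9299/integrationtest-index -d '
{
  "mappings" : {
    "searchable" : {
      "properties" : {
        "ref" : { "type" : "string", "index" : "not_analyzed" }
      }
    }
  }
}'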

Check null parameter in painless - Elasticsearch

How are we supposed to check whether a key in a map passed as a parameter to a Painless script has a value? I am currently doing this in Elasticsearch 6.8.4:
if (params.feedId != null) {
  whatever()
}
but it throws this exception when params.feedId is null:
ElasticsearchException[Elasticsearch exception [type=illegal_argument_exception, reason=Cannot cast null to a primitive type [void].]]
It works fine with Elasticsearch 5.6.13, so there seems to be a breaking change but I couldn't find anything so far.
UPDATE: It was actually not a problem with this part of the script; in some other parts of it we were doing return null; to break out of the script, which no longer seems to work for some reason. To give more context, I am doing those updates using the Java API, more specifically the High Level REST Client.
I have been able to reproduce the issue using a curl command like this
curl -X POST "localhost:9200/myindex/_doc/1/_update?pretty" -H 'Content-Type: application/json' -d'
{
"script": {
"source": "ctx._source.result = true; return null;"
}
}'
This produces the following output:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "remote_transport_exception",
        "reason" : "[sBix--f][172.18.0.4:9300][indices:data/write/update[s]]"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "failed to execute script",
    "caused_by" : {
      "type" : "script_exception",
      "reason" : "compile error",
      "script_stack" : [
        "... ce.result = true; return null;",
        " ^---- HERE"
      ],
      "script" : "ctx._source.result = true; return null;",
      "lang" : "painless",
      "caused_by" : {
        "type" : "illegal_argument_exception",
        "reason" : "Cannot cast null to a primitive type [void]."
      }
    }
  },
  "status" : 400
}
Just replacing return null; with return; makes this script work fine in ES 6.8.4.
I found this issue in the Elasticsearch repository: https://github.com/elastic/elasticsearch/issues/35888. There are some inconsistencies in how short-circuiting works in different versions of Elasticsearch.
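Combining the null check from the question with that fix, a null-safe update for 6.8 might look roughly like the sketch below (the field name feedId and the parameter value are illustrative, not taken from a real index):
curl -X POST "localhost:9200/myindex/_doc/1/_update?pretty" -H 'Content-Type: application/json' -d'
{
  "script": {
    "source": "if (params.feedId == null) { return; } ctx._source.feedId = params.feedId",
    "params": { "feedId": "some-feed" }
  }
}'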
