I'm performing some operations in Dataflow and putting documents into an Elasticsearch index. While trying to fetch docs from Kibana, I'm not able to fetch more than 10 records at a time. So I used the scan operation and also provided the size in the URL, but now I'm getting a "scan operation not supported" error:
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "No search type for [scan]"
}
],
"type" : "illegal_argument_exception",
"reason" : "No search type for [scan]"
},
So, is there any way to get more than 10 docs from Kibana at a time? I'm using Kibana 7.7.0. Thanks in advance.
search_type=scan was supported until Elasticsearch v2.1 and was then removed.
You're probably running something newer than ES 2.1:
https://www.elastic.co/guide/en/elasticsearch/reference/2.1/search-request-search-type.html
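On any recent version, you can instead raise the size parameter (capped by index.max_result_window, 10,000 by default) or page through larger result sets with the scroll API, which replaced scan. A minimal sketch, using a hypothetical index name my-index:

GET /my-index/_search
{
  "size": 100,
  "query": { "match_all": {} }
}

POST /my-index/_search?scroll=1m
{
  "size": 1000,
  "query": { "match_all": {} }
}

The scroll call returns a _scroll_id that you then pass to POST /_search/scroll to fetch the next batch.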
I was asked to migrate to data streams in Elasticsearch. I'm a newbie in Elasticsearch and still learning about it. The only useful article I could find: https://spinscale.de/posts/2021-07-07-elasticsearch-data-streams-explained.html#data-streams-in-kibana
Since we are using Elasticsearch under a basic license, I got an error when following along with the tutorial and creating an ILM policy:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "policy [csc-stream-policy] defines the [searchable_snapshot] action but the current license is non-compliant for [searchable-snapshots]"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "policy [csc-stream-policy] defines the [searchable_snapshot] action but the current license is non-compliant for [searchable-snapshots]"
  },
  "status" : 400
}
Can anyone give me an idea what else I could do to activate data streams in Elasticsearch? I can confirm that searchable snapshots are not supported in the free license. Is there another way around it?
Thanks in advance!
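Data streams themselves don't require searchable snapshots, so one option is simply to leave the searchable_snapshot action out of the policy. A minimal sketch that should work under the basic license (the policy name is taken from the error message; the rollover and delete thresholds are placeholder values to adjust):

PUT _ilm/policy/csc-stream-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "30d"
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}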
I am trying to create an alias with a filter on the index pattern metrics-*. I was able to do it yesterday and the day before without any problems, but I can't do it again today, even if I re-run the same queries as yesterday. I have no problem creating an alias for logs-*, but when I try to create a metrics-* alias, I get an HTTP 400 with this response:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "expressions [metrics-system.filesystem-default, metrics-system.cpu-default, metrics-endpoint.policy-default, metrics-endpoint.metrics-default, metrics-windows.perfmon-default, metrics-azure.compute_vm-default, metrics-system.process.summary-default, metrics-elastic_agent.endpoint_security-default, metrics-endpoint.metadata-default, metrics-endpoint.metadata_current_default, metrics-azure.storage_account-default, metrics-system.memory-default, metrics-system.uptime-default, metrics-elastic_agent.elastic_agent-default, metrics-windows.service-default, metrics-elastic_agent.metricbeat-default, metrics-system.fsstat-default, metrics-system.process-default, metrics-elastic_agent.filebeat-default, metrics-system.network-default, metrics-system.diskio-default, metrics-system.load-default, metrics-system.socket_summary-default] that match with both data streams and regular indices are disallowed"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "expressions [metrics-system.filesystem-default, metrics-system.cpu-default, metrics-endpoint.policy-default, metrics-endpoint.metrics-default, metrics-windows.perfmon-default, metrics-azure.compute_vm-default, metrics-system.process.summary-default, metrics-elastic_agent.endpoint_security-default, metrics-endpoint.metadata-default, metrics-endpoint.metadata_current_default, metrics-azure.storage_account-default, metrics-system.memory-default, metrics-system.uptime-default, metrics-elastic_agent.elastic_agent-default, metrics-windows.service-default, metrics-elastic_agent.metricbeat-default, metrics-system.fsstat-default, metrics-system.process-default, metrics-elastic_agent.filebeat-default, metrics-system.network-default, metrics-system.diskio-default, metrics-system.load-default, metrics-system.socket_summary-default] that match with both data streams and regular indices are disallowed"
  },
  "status" : 400
}
Here is the request:
PUT metrics-*/_alias/perso-metrics
{
  "filter": {
    "term": {
      "agent.name": "minecraft-server"
    }
  }
}
Thanks in advance for your help.
It looks like some of your indices whose names start with metrics are regular indices rather than data streams. A single alias request can't target both kinds at once; if you create the aliases separately for the data streams and for the regular indices, it will work.
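A minimal sketch with the _aliases API, listing concrete names instead of the wildcard (the stream names below are taken from your error message; note that one alias cannot span both data streams and regular indices, and filtered aliases on data streams need a fairly recent Elasticsearch):

POST _aliases
{
  "actions": [
    {
      "add": {
        "index": "metrics-system.cpu-default",
        "alias": "perso-metrics",
        "filter": {
          "term": { "agent.name": "minecraft-server" }
        }
      }
    },
    {
      "add": {
        "index": "metrics-system.memory-default",
        "alias": "perso-metrics",
        "filter": {
          "term": { "agent.name": "minecraft-server" }
        }
      }
    }
  ]
}

Any regular indices matching metrics-* would then get their own alias (e.g. a hypothetical perso-metrics-indices) through a second request of the same shape.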
I'm having trouble with what seems like a fairly basic use case, but I'm hitting certain limitations in Kibana and problems with certain geo data types. It's starting to feel like I'm just approaching it wrong.
I have a relatively large point dataset (locations) of type geo_point, with a map and dashboard built. I now want to add a complex AOI. I took the shapefile, dissolved it so it became one feature instead of many, converted it to GeoJSON, and uploaded it (creating an index) via the Kibana Maps functionality. I then made it available as a layer, and wanted to allow it to be selected, show a tooltip, and then Filter by Feature. Unfortunately I then received an error saying, roughly, that the operation was too large to be posted to the URL, which I understand, as there are over 2 million characters in the GeoJSON.
Instead I thought I could write the query against the pre-indexed shape, following the guidance at: https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-geo-shape-query.html
However, it doesn't seem to allow a geo_point field to be queried against a geo_shape, e.g.:
GET /locations_index/_search
{
  "query": {
    "geo_point": {
      "geolocation": {
        "relation": "within",
        "indexed_shape": {
          "index": "aoi_index",
          "id": "GYruUnMBfgunZ6kjA8qn",
          "path": "coordinates"
        }
      }
    }
  }
}
Gives an error of:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "parsing_exception",
        "reason" : "no [query] registered for [geo_point]",
        "line" : 3,
        "col" : 18
      }
    ],
    "type" : "parsing_exception",
    "reason" : "no [query] registered for [geo_point]",
    "line" : 3,
    "col" : 18
  },
  "status" : 400
}
Do I need to convert my points index to geo_shape instead of geo_point? Or is there a simpler way?
I note the documentation at https://www.elastic.co/guide/en/elasticsearch/guide/current/filter-by-geopoint.html suggests that I can query by geo_polygon, but I can't see any way of referencing my pre-indexed shape instead of putting the huge chunk of JSON in the query (as the example suggests).
Can anyone point me (even roughly) in the right direction?
Thanks in advance.
Here's how you can utilize indexed_shape: the error ("no [query] registered for [geo_point]") means geo_point is not a query type; the query you want is geo_shape, which on recent versions can also search geo_point fields. Let me know if this is sufficient to get you started.
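A sketch of the corrected query, assuming Elasticsearch 7.5+ (where geo_shape queries support geo_point fields) and keeping your index names, field name, and document id as given:

GET /locations_index/_search
{
  "query": {
    "geo_shape": {
      "geolocation": {
        "relation": "within",
        "indexed_shape": {
          "index": "aoi_index",
          "id": "GYruUnMBfgunZ6kjA8qn",
          "path": "coordinates"
        }
      }
    }
  }
}

If your version predates geo_shape-over-geo_point support, converting the points index to geo_shape would also work, but it shouldn't be necessary on anything recent.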
We were using Elasticsearch 6.x deployed on our own server.
We recently migrated to the cloud, so the version in use is now 7.x.
We have a huge query with aggregations that worked on 6.x, but it no longer works.
This is due to a breaking change between versions:
https://www.elastic.co/guide/en/elasticsearch/reference/current/breaking-changes-7.0.html#breaking_70_aggregations_changes
"search.max_buckets in the cluster setting: The dynamic cluster setting named search.max_buckets now defaults to 10,000 (instead of unlimited in the previous version). Requests that try to return more than the limit will fail with an exception."
So when we execute the query with aggregations, we get this exception:
"type" : "search_phase_execution_exception",
"reason" : "all shards failed",
"phase" : "query",
"grouped" : true,
"failed_shards" : [
{
"shard" : 0,
"index" : "xxxxxxx",
"node" : "xxxxxxxxxxxxxxxx",
"reason" : {
"type" : "too_many_buckets_exception",
"reason" : "Trying to create too many buckets. Must be less than or equal to: [10000] but was [10001]. This limit can be set by changing the [search.max_buckets] cluster level setting.",
"max_buckets" : 10000
}
}
We don't have time to change the query, so how can we configure this parameter on Elastic Cloud?
Or can I add a parameter to the query?
Thanks for your help.
I found the answer on the Elastic discussion forum:
https://discuss.elastic.co/t/increasing-max-buckets-for-specific-visualizations/187390
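In short, search.max_buckets is a dynamic cluster setting, so it can be raised through the cluster settings API, which works on Elastic Cloud too (e.g. from Kibana Dev Tools). The value 20000 below is just an illustrative choice:

PUT _cluster/settings
{
  "persistent": {
    "search.max_buckets": 20000
  }
}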
We're currently using a Couchbase plugin (transport-couchbase) to transport and index data into Elasticsearch (http://docs.couchbase.com/couchbase-elastic-search/).
I've taken a look at Elasticsearch's mapping documentation here:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping.html
My understanding is that if you rely on Elasticsearch's defaults, once a document gets indexed, Elasticsearch creates a dynamic mapping for that document type. This is what we've defaulted to.
We ran into issues where, after a specific document type was added, the transport plugin inserted an "invalid" document (a field's type had changed, from string to array), and Elasticsearch threw an exception, essentially breaking the replication from Couchbase to Elasticsearch. The exception looks like this:
Caused by: org.elasticsearch.ElasticsearchIllegalArgumentException: unknown property [xyz]
java.lang.RuntimeException: indexing error MapperParsingException[failed to parse [doc.myfield]]; nested: ElasticsearchIllegalArgumentException[unknown property [xyz]]
Is there a way we can configure Elasticsearch so that "invalid" documents simply get filtered out without throwing an exception and breaking the replication?
Thanks.
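The dynamic setting in the mapping is the relevant knob here: "strict" makes Elasticsearch reject documents with unknown fields outright, while false makes it silently ignore the unknown fields and index the rest, which is closer to the filtering you describe. For reference, here is the strict variant from the dynamic mapping docs (linked below):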
{
  "tweet" : {
    "dynamic": "strict",
    "properties" : {
      "message" : { "type" : "string", "store" : true }
    }
  }
}
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-dynamic-mapping.html