I am trying to set up Logstash pipeline monitoring using the official API guide for ES
7.17 https://www.elastic.co/guide/en/logstash/7.17/node-info-api.html
The JVM, process, and OS monitoring work for me, but the pipeline, reloads, and events monitoring return
{
"error" : {
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "request [/_node/stats/pipelines] contains unrecognized metric: [pipelines]"
}
],
"type" : "illegal_argument_exception",
"reason" : "request [/_node/stats/pipelines] contains unrecognized metric: [pipelines]"
},
"status" : 400
}
The correct URL is
_node/stats/pipelines
not
_nodes/stats/pipelines
(note: no "s" on "_node")
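As a quick check, the Logstash node stats API can be queried directly (a sketch, assuming Logstash's monitoring API is listening on its default port 9600):
curl -XGET "localhost:9600/_node/stats/pipelines?pretty"
This request goes to Logstash itself rather than Elasticsearch, which is why the path uses _node (singular) instead of Elasticsearch's _nodes endpoint.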
Related
enrich.fetch_size - Maximum batch size when reindexing a source index into an enrich index. Defaults to 10000.
When the value in elasticsearch.yml is changed to e.g. 20000, the following error appears when executing the enrich policy:
{
"error" : {
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "Batch size is too large, size must be less than or equal to: [10000] but was [20000]. Scroll batch sizes cost as much memory as result windows so they are controlled by the [index.max_result_window] index level setting."
}
],
"type" : "search_phase_execution_exception",
"reason" : "Partial shards failure",
"phase" : "query",
"grouped" : true,
"failed_shards" : [
{
"shard" : 0,
"index" : "name-of-index",
"node" : "node-id",
"reason" : {
"type" : "illegal_argument_exception",
"reason" : "Batch size is too large, size must be less than or equal to: [10000] but was [20000]. Scroll batch sizes cost as much memory as result windows so they are controlled by the [index.max_result_window] index level setting."
}
}
]
},
"status" : 400
}
config file:
...
discovery:
  seed_hosts:
    - "127.0.0.1"
    - "[::1]"
    - elasticsearch
script:
  context:
    template:
      max_compilations_rate: 400/5m
      cache_max_size: 400
enrich:
  fetch_size: 20000
...
This is a pretty common mistake; I think you have not restarted your Elasticsearch server, so the new changes in elasticsearch.yml are not loaded.
If it's not resolved after a restart, then share your config file and I will take a look at it.
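The error message itself also points at a second constraint: scroll batch sizes are capped by the index.max_result_window setting on the source index. If a restart alone doesn't help, raising that setting on the source index may be required (a sketch, using the placeholder index name from the error output):
PUT name-of-index/_settings
{
  "index": {
    "max_result_window": 20000
  }
}
With that in place, a fetch_size of 20000 should no longer exceed the per-index limit, though larger scroll batches do cost proportionally more memory.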
I am using the Elasticsearch snapshot API endpoint to take backups. It was working fine for me, but suddenly I am getting this error -
"error" : {
"root_cause" : [
{
"type" : "snapshot_exception",
"reason" : "[my_s3_repository:daily_backup_202205160300/yvQaLO25SQam8NU3PF7aSQ] failed to update snapshot in repository"
}
],
"type" : "snapshot_exception",
"reason" : "[my_s3_repository:daily_backup_202205160300/yvQaLO25SQam8NU3PF7aSQ] failed to update snapshot in repository",
"caused_by" : {
"type" : "i_o_exception",
"reason" : "Unmatched second part of surrogate pair (0xDE83)",
"suppressed" : [
{
"type" : "illegal_state_exception",
"reason" : "Failed to close the XContentBuilder",
"caused_by" : {
"type" : "i_o_exception",
"reason" : "Unclosed object or array found"
}
}
]
}
},
"status" : 500
}
This is the command I am using
curl -XPUT "localhost:9200/_snapshot/my_s3_repository/daily_backup_202205160300?wait_for_completion=true"
Any ideas why this is happening?
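One way to narrow this down might be to verify the repository itself (a sketch using the snapshot repository verify API with the repository name from the error):
curl -XPOST "localhost:9200/_snapshot/my_s3_repository/_verify?pretty"
If verification fails, the problem is likely with the repository's backing storage (here, the S3 bucket) rather than with the snapshot request itself.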
I have an Elasticsearch snapshot repository configured to point at a folder on an NFS share, and I'm unable to list snapshots because of strange errors:
curl -X GET "10.0.1.159:9200/_cat/snapshots/temp_elastic_backup?v&s=id&pretty"
{
"error" : {
"root_cause" : [
{
"type" : "parse_exception",
"reason" : "start object expected"
}
],
"type" : "parse_exception",
"reason" : "start object expected"
},
"status" : 400
}
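The _cat API tends to truncate error detail; querying the snapshot API directly (a sketch, assuming the same host and repository name) may surface a fuller stack trace in the response or in the server logs:
curl -X GET "10.0.1.159:9200/_snapshot/temp_elastic_backup/_all?pretty"
A parse_exception like "start object expected" can indicate a corrupted or truncated metadata file in the repository, so checking the NFS mount for incomplete writes may also help.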
I tried to edit the Packetbeat policy, and then from Index Management in Kibana I removed the policy from that index and added it again (to take the new configuration into consideration).
Unfortunately, I am getting a lifecycle error:
illegal_argument_exception: rollover target [packetbeat-7.9.2] does not point to a write index
I tried to run:
PUT packetbeat-7.9.2-2020.11.17-000002
{
"aliases": {
"packetbeat-7.9.2": {
"is_write_index": true
}
}
}
But I got the error:
{
"error" : {
"root_cause" : [
{
"type" : "resource_already_exists_exception",
"reason" : "index [packetbeat-7.9.2-2020.11.17-000002/oIsVi0TVS4WHHwoh4qgyPg] already exists",
"index_uuid" : "oIsVi0TVS4WHHwoh4qgyPg",
"index" : "packetbeat-7.9.2-2020.11.17-000002"
}
],
"type" : "resource_already_exists_exception",
"reason" : "index [packetbeat-7.9.2-2020.11.17-000002/oIsVi0TVS4WHHwoh4qgyPg] already exists",
"index_uuid" : "oIsVi0TVS4WHHwoh4qgyPg",
"index" : "packetbeat-7.9.2-2020.11.17-000002"
},
"status" : 400
}
Could you tell me how I can solve this issue, please?
Thanks for your help
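Since the index already exists, PUT <index> (which tries to create it) will always fail with resource_already_exists_exception; updating the alias on an existing index goes through the aliases API instead. A sketch of the usual fix, assuming packetbeat-7.9.2-2020.11.17-000002 is meant to be the write index:
POST _aliases
{
  "actions": [
    {
      "add": {
        "index": "packetbeat-7.9.2-2020.11.17-000002",
        "alias": "packetbeat-7.9.2",
        "is_write_index": true
      }
    }
  ]
}
After this, ILM's rollover action should be able to resolve a write index for the packetbeat-7.9.2 alias.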
I followed the guidelines to install Auditbeat in ELK to send my auditd logs to ELK, but unfortunately I just can't seem to be able to make it work. I checked my config files multiple times and I just can't wrap my head around it. When I look up the index "auditbeat-*" in Kibana, it finds no results at all.
When I check the state of the module itself, I get:
curl localhost:9200/auditbeat-6.2.1-2018.02.14/_search?pretty
{
"error" : {
"root_cause" : [ {
"type" : "index_not_found_exception",
"reason" : "no such index",
"resource.type" : "index_or_alias",
"resource.id" : "auditbeat-6.2.1-2018.02.14",
"index" : "auditbeat-6.2.1-2018.02.14"
} ],
"type" : "index_not_found_exception",
"reason" : "no such index",
"resource.type" : "index_or_alias",
"resource.id" : "auditbeat-6.2.1-2018.02.14",
"index" : "auditbeat-6.2.1-2018.02.14"
},
"status" : 404
}
so I am not sure where to take it from there. I tried sending those via both Elasticsearch and Logstash, but I keep getting the same results no matter what.
Thanks,
So it turns out this is happening because the port is bound to address 127.0.0.1 instead of 0.0.0.0.
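A sketch of the corresponding setting, assuming this refers to Elasticsearch's HTTP port configured in elasticsearch.yml:
network.host: 0.0.0.0
Note that binding to a non-loopback address switches Elasticsearch into production mode, so the bootstrap checks must pass on startup, and exposing the port beyond localhost should be paired with appropriate security settings.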