Auditbeat failure in ELK: index_not_found_exception - elasticsearch

I followed the guidelines for installing Auditbeat in ELK to ship my auditd logs, but unfortunately I just can't get it to work. I have checked my config files multiple times and I can't see what's wrong. When I look up the index "auditbeat-*" in Kibana, it finds no results at all.
When I check the index itself, I get:
curl localhost:9200/auditbeat-6.2.1-2018.02.14/_search?pretty
{
"error" : {
"root_cause" : [ {
"type" : "index_not_found_exception",
"reason" : "no such index",
"resource.type" : "index_or_alias",
"resource.id" : "auditbeat-6.2.1-2018.02.14",
"index" : "auditbeat-6.2.1-2018.02.14"
} ],
"type" : "index_not_found_exception",
"reason" : "no such index",
"resource.type" : "index_or_alias",
"resource.id" : "auditbeat-6.2.1-2018.02.14",
"index" : "auditbeat-6.2.1-2018.02.14"
},
"status" : 404
}
So I am not sure where to go from there. I tried sending the events through both the Elasticsearch and Logstash outputs, but I keep getting the same result no matter what.
Thanks,

It turns out this is happening because the port is bound to address 127.0.0.1 instead of 0.0.0.0, so it cannot be reached from other hosts.
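Assuming the address in question is the Elasticsearch HTTP port (the answer does not say which service it was), a minimal elasticsearch.yml change to bind on all interfaces might look like this; note that 0.0.0.0 exposes the node beyond localhost, so restrict access with a firewall or security settings:

```yaml
# elasticsearch.yml — assumption: it is the Elasticsearch port that was
# bound to 127.0.0.1; 0.0.0.0 listens on all interfaces.
network.host: 0.0.0.0
```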

Related

Logstash monitoring not working for pipeline, reloads and events

I am trying to use Logstash pipeline monitoring, following the official API guide for Logstash
7.17: https://www.elastic.co/guide/en/logstash/7.17/node-info-api.html
The jvm, process, and os monitoring works for me, but the pipeline, reloads, and events monitoring returns:
{
"error" : {
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "request [/_node/stats/pipelines] contains unrecognized metric: [pipelines]"
}
],
"type" : "illegal_argument_exception",
"reason" : "request [/_node/stats/pipelines] contains unrecognized metric: [pipelines]"
},
"status" : 400
}
The correct URL is
_node/stats/pipelines
not
_nodes/stats/pipelines
(note the singular _node).
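For reference, a sketch of the corrected call; pipeline stats come from the Logstash monitoring API, which listens on port 9600 by default (host and port here are assumptions — adjust to your setup):

```shell
# Query Logstash's own node stats API (singular _node), not Elasticsearch's _nodes API.
curl -XGET 'http://localhost:9600/_node/stats/pipelines?pretty'
```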

Unable to list snapshots in Elasticsearch repository

I have an Elasticsearch snapshot repository configured to point at a folder on an NFS share, and I'm unable to list the snapshots because of a strange error:
curl -X GET "10.0.1.159:9200/_cat/snapshots/temp_elastic_backup?v&s=id&pretty"
{
"error" : {
"root_cause" : [
{
"type" : "parse_exception",
"reason" : "start object expected"
}
],
"type" : "parse_exception",
"reason" : "start object expected"
},
"status" : 400
}
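No answer is recorded here, but since a parse_exception like this can indicate unreadable repository metadata, a first diagnostic step might be to verify the repository and query the snapshot API directly (both standard Elasticsearch APIs; host and repository name taken from the question):

```shell
# Check that every node can read and write the repository location.
curl -XPOST 'http://10.0.1.159:9200/_snapshot/temp_elastic_backup/_verify?pretty'

# List snapshots through the snapshot API instead of _cat, which may
# return a more detailed error message.
curl -XGET 'http://10.0.1.159:9200/_snapshot/temp_elastic_backup/_all?pretty'
```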

AWS Elasticsearch frequently getting into yellow state

I have AWS Elasticsearch running with 3 master nodes (c4.large) and 10
data nodes (c5.large). Recently the domain has frequently been going into
yellow state for around 30 minutes, and then, without me doing anything,
it changes back to green.
When I run the query GET /_cluster/allocation/explain?pretty, this is what I see:
{
"index" : "lgst-",
"shard" : 4,
"primary" : false,
"current_state" : "unassigned",
"unassigned_info" : {
"reason" : "NODE_LEFT",
"at" : "2021-01-06T13:15:38.721Z",
"details" : "node_left [**************]",
"last_allocation_status" : "no_attempt"
},
"can_allocate" : "yes",
"allocate_explanation" : "can allocate the shard",
"target_node" : {
"id" : "****************",
"name" : "********************"
},
I couldn't understand what this means or how to overcome it. Any help would be appreciated.
It looks like you are using spot instances in your cluster, and the cause is that the nodes in your AWS setup are not stable, as shown clearly in the unassigned_info:
"unassigned_info" : {
"reason" : "NODE_LEFT",
"at" : "2021-01-06T13:15:38.721Z",
"details" : "node_left [**************]",
"last_allocation_status" : "no_attempt"
},
I would suggest changing the instance types if you are using EC2 spot instances, and checking with AWS support why nodes are getting disconnected in your cluster.
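While the cluster is yellow, the standard _cat/shards API can show which shards are unassigned and why (host is an assumption):

```shell
# List unassigned shards together with the unassigned reason (e.g. NODE_LEFT).
curl -XGET 'http://localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason' | grep UNASSIGNED
```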

Packetbeat does not point to a write index

I tried to edit the Packetbeat ILM policy, and then from Index Management in Kibana I removed the policy from that index and added it again (to take the new configuration into account).
Unfortunately I am getting a lifecycle error:
illegal_argument_exception: rollover target [packetbeat-7.9.2] does not point to a write index
I tried to run:
PUT packetbeat-7.9.2-2020.11.17-000002
{
"aliases": {
"packetbeat-7.9.2": {
"is_write_index": true
}
}
}
But I got the error:
{
"error" : {
"root_cause" : [
{
"type" : "resource_already_exists_exception",
"reason" : "index [packetbeat-7.9.2-2020.11.17-000002/oIsVi0TVS4WHHwoh4qgyPg] already exists",
"index_uuid" : "oIsVi0TVS4WHHwoh4qgyPg",
"index" : "packetbeat-7.9.2-2020.11.17-000002"
}
],
"type" : "resource_already_exists_exception",
"reason" : "index [packetbeat-7.9.2-2020.11.17-000002/oIsVi0TVS4WHHwoh4qgyPg] already exists",
"index_uuid" : "oIsVi0TVS4WHHwoh4qgyPg",
"index" : "packetbeat-7.9.2-2020.11.17-000002"
},
"status" : 400
}
Could you tell me how I can solve this issue, please?
Thanks for your help
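No answer is recorded here, but note that PUT packetbeat-7.9.2-2020.11.17-000002 tries to create that index, which is why it fails with resource_already_exists_exception. A sketch of the alternative, updating the alias on the existing index through the standard _aliases API (index and alias names taken from the question; host is an assumption):

```shell
# Mark the existing backing index as the write index for the rollover alias.
curl -XPOST 'http://localhost:9200/_aliases?pretty' -H 'Content-Type: application/json' -d '
{
  "actions": [
    {
      "add": {
        "index": "packetbeat-7.9.2-2020.11.17-000002",
        "alias": "packetbeat-7.9.2",
        "is_write_index": true
      }
    }
  ]
}'
```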

Elasticsearch field mapping affects fields across different types in the same index

I was told that "every type has its own mapping, or schema definition" in the official guide.
But what I have actually seen is that a mapping can affect other types within the same index. Here is the situation:
Mapping definition:
[root@localhost agent]# curl localhost:9200/agent*/_mapping?pretty
{
"agent_data" : {
"mappings" : {
"host" : {
"_all" : {
"enabled" : false
},
"properties" : {
"ip" : {
"type" : "ip"
},
"node" : {
"type" : "string",
"index" : "not_analyzed"
}
}
},
"vul" : {
"_all" : {
"enabled" : false
}
}
}
}
}
and then I index a record:
[root@localhost agent]# curl -XPOST 'http://localhost:9200/agent_data/vul?pretty' -d '{"ip": "1.1.1.1"}'
{
"error" : {
"root_cause" : [ {
"type" : "mapper_parsing_exception",
"reason" : "failed to parse [ip]"
} ],
"type" : "mapper_parsing_exception",
"reason" : "failed to parse [ip]",
"caused_by" : {
"type" : "number_format_exception",
"reason" : "For input string: \"1.1.1.1\""
}
},
"status" : 400
}
It seems that it tries to parse the ip as a number. So I put a number in this field:
[root@localhost agent]# curl -XPOST 'http://localhost:9200/agent_data/vul?pretty' -d '{"ip": "1123"}'
{
"error" : {
"root_cause" : [ {
"type" : "remote_transport_exception",
"reason" : "[Argus][127.0.0.1:9300][indices:data/write/index[p]]"
} ],
"type" : "illegal_argument_exception",
"reason" : "mapper [ip] cannot be changed from type [ip] to [long]"
},
"status" : 400
}
This problem goes away if I explicitly define the ip field of the vul type with the ip field-type.
I don't quite understand the behavior above. Am I missing something?
Thanks in advance.
The statement
Every type has its own mapping, or schema definition
is true, but it is not the complete picture.
There may be conflicts between different types that share a field name within one index.
Mapping - field conflicts
Mapping types are used to group fields, but the fields in each
mapping type are not independent of each other. Fields with:
the same name
in the same index
in different mapping types
map to the same field internally, and must have the same mapping. If a
title field exists in both the user and blogpost mapping types, the
title fields must have exactly the same mapping in each type. The only
exceptions to this rule are the copy_to, dynamic, enabled,
ignore_above, include_in_all, and properties parameters, which may
have different settings per field.
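As the question already found, defining the field explicitly avoids the conflict. A sketch of adding a matching ip mapping to the vul type on this pre-5.x cluster (put-mapping with a type name in the URL, as used in Elasticsearch 1.x/2.x; index, type, and field names taken from the question):

```shell
# Define vul's ip field with the same mapping as host's ip field, so the
# shared internal field is consistent across both types.
curl -XPUT 'http://localhost:9200/agent_data/_mapping/vul?pretty' -d '
{
  "vul": {
    "properties": {
      "ip": { "type": "ip" }
    }
  }
}'
```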
