Packetbeat does not point to a write index - elasticsearch

I tried to edit the Packetbeat policy, and then from Index Management in Kibana I removed the policy from that index and added it again (to take the new configuration into consideration).
Unfortunately, I am getting a lifecycle error:
illegal_argument_exception: rollover target [packetbeat-7.9.2] does not point to a write index
I tried to run:
PUT packetbeat-7.9.2-2020.11.17-000002
{
  "aliases": {
    "packetbeat-7.9.2": {
      "is_write_index": true
    }
  }
}
But I got the error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "resource_already_exists_exception",
        "reason" : "index [packetbeat-7.9.2-2020.11.17-000002/oIsVi0TVS4WHHwoh4qgyPg] already exists",
        "index_uuid" : "oIsVi0TVS4WHHwoh4qgyPg",
        "index" : "packetbeat-7.9.2-2020.11.17-000002"
      }
    ],
    "type" : "resource_already_exists_exception",
    "reason" : "index [packetbeat-7.9.2-2020.11.17-000002/oIsVi0TVS4WHHwoh4qgyPg] already exists",
    "index_uuid" : "oIsVi0TVS4WHHwoh4qgyPg",
    "index" : "packetbeat-7.9.2-2020.11.17-000002"
  },
  "status" : 400
}
Could you tell me how I can solve this issue, please?
Thanks for your help.
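Since the index already exists, PUT (which creates a new index) cannot attach the alias. A minimal sketch of one possible fix, assuming the index and alias names from the errors above: use the update-aliases API to flag the existing index as the write index instead.

POST /_aliases
{
  "actions": [
    {
      "add": {
        "index": "packetbeat-7.9.2-2020.11.17-000002",
        "alias": "packetbeat-7.9.2",
        "is_write_index": true
      }
    }
  ]
}

Once the alias points to a write index, the failed lifecycle step can be retried with POST packetbeat-7.9.2-2020.11.17-000002/_ilm/retry.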

Related

Elasticsearch failed to update snapshot in repository error

I am using the Elasticsearch API snapshot endpoint to take backups. It was working fine for me, but suddenly I am getting this error:
"error" : {
"root_cause" : [
{
"type" : "snapshot_exception",
"reason" : "[my_s3_repository:daily_backup_202205160300/yvQaLO25SQam8NU3PF7aSQ] failed to update snapshot in repository"
}
],
"type" : "snapshot_exception",
"reason" : "[my_s3_repository:daily_backup_202205160300/yvQaLO25SQam8NU3PF7aSQ] failed to update snapshot in repository",
"caused_by" : {
"type" : "i_o_exception",
"reason" : "Unmatched second part of surrogate pair (0xDE83)",
"suppressed" : [
{
"type" : "illegal_state_exception",
"reason" : "Failed to close the XContentBuilder",
"caused_by" : {
"type" : "i_o_exception",
"reason" : "Unclosed object or array found"
}
}
]
}
},
"status" : 500
}
This is the command I am using:
curl -XPUT "localhost:9200/_snapshot/my_s3_repository/daily_backup_202205160300?wait_for_completion=true"
Any ideas why this is happening?
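Two standard diagnostic calls that may help narrow this down (the repository and snapshot names are taken from the error above; reading the root cause as invalid UTF-16 sneaking into the snapshot metadata is an assumption, not something confirmed in this thread):

# check whether the repository itself is reachable and writable
POST /_snapshot/my_s3_repository/_verify

# inspect the state of the failed snapshot
GET /_snapshot/my_s3_repository/daily_backup_202205160300/_status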

Getting a timestamp exception when I try to update an unrelated field using painless in elasticsearch

I'm trying to run the following script:
POST /data_hip/_update/1638643727.0
{
  "script": {
    "source": "ctx._source.avgmer=4;"
  }
}
But I am getting the following error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "mapper_parsing_exception",
        "reason" : "failed to parse field [@timestamp] of type [date] in document with id '1638643727.0'. Preview of field's value: '1.638642742E12'"
      }
    ],
    "type" : "mapper_parsing_exception",
    "reason" : "failed to parse field [@timestamp] of type [date] in document with id '1638643727.0'. Preview of field's value: '1.638642742E12'",
    "caused_by" : {
      "type" : "illegal_argument_exception",
      "reason" : "failed to parse date field [1.638642742E12] with format [epoch_millis]",
      "caused_by" : {
        "type" : "date_time_parse_exception",
        "reason" : "Failed to parse with all enclosed parsers"
      }
    }
  },
  "status" : 400
}
This is strange because on queries (not updates) the datetime is parsed fine.
The timestamp field mapping is as follows:
"#timestamp": {
"type":"date",
"format":"epoch_millis"
},
I am running Elasticsearch 7+.
EDIT:
Adding my index settings:
{
  "data_hip" : {
    "settings" : {
      "index" : {
        "routing" : {
          "allocation" : {
            "include" : {
              "_tier_preference" : "data_content"
            }
          }
        },
        "number_of_shards" : "1",
        "provided_name" : "data_hip",
        "creation_date" : "1638559533343",
        "number_of_replicas" : "1",
        "uuid" : "CHjkvSdhSgySLioCju9NqQ",
        "version" : {
          "created" : "7150199"
        }
      }
    }
  }
}
I'm not running an ingest pipeline.
The problem is the scientific notation, the 'E12' suffix, in a field that ES expects to be an integer.
Using this reprex:
PUT so_test
{
  "mappings": {
    "properties": {
      "ts": {
        "type": "date",
        "format": "epoch_millis"
      }
    }
  }
}

# this works
POST so_test/_doc/
{
  "ts" : "123456789"
}

# this does not, throws the same error you have IRL
POST so_test/_doc/
{
  "ts" : "123456789E12"
}
I'm not sure how/where those values are creeping in, but they are there in the document you are passing to ES.
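As a small extension of the reprex (my addition, not the original answerer's): the same mapping accepts the value once it is supplied as a plain number of epoch milliseconds, which is what the epoch_millis format expects.

# a plain long parses fine
POST so_test/_doc/
{
  "ts" : 1638642742000
}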

How can I find out the size of an index in bytes with a query in elasticsearch?

How can I find out the size in bytes of an index with a query from Kibana? I tried some queries, but they did not return a result.
GET /my_index_name/_stats
or
GET /_cat/indices/my-index_name?v=true&s=index
Error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "current license is non-compliant for [security]",
        "license.expired.feature" : "security",
        "suppressed" : [
          {
            "type" : "security_exception",
            "reason" : "current license is non-compliant for [security]",
            "license.expired.feature" : "security"
          }
        ]
      }
    ],
    "type" : "security_exception",
    "reason" : "current license is non-compliant for [security]",
    "license.expired.feature" : "security",
    "suppressed" : [
      {
        "type" : "security_exception",
        "reason" : "current license is non-compliant for [security]",
        "license.expired.feature" : "security"
      }
    ]
  },
  "status" : 403
}
What can I do to solve this problem? Please help me!
You need to add the following to your elasticsearch.yml configuration file and restart your node:
xpack.security.enabled: false
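Once the requests go through, the index size can be read from either response; a quick sketch of where to look (the store filter and the _cat column parameters are standard API options):

# total size in bytes appears under _all.total.store.size_in_bytes
GET /my_index_name/_stats/store

# or have _cat print the store size as plain bytes
GET /_cat/indices/my_index_name?v=true&h=index,store.size&bytes=b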

ElasticSearch, simple two fields comparison with painless

I'm trying to run a query such as SELECT * FROM indexPeople WHERE info.Age > info.AgeExpectancy.
Note the two fields are NOT nested; they are just JSON objects.
POST /indexPeople/_search
{
  "from" : 0,
  "size" : 200,
  "query" : {
    "bool" : {
      "filter" : [
        {
          "bool" : {
            "must" : [
              {
                "script" : {
                  "script" : {
                    "source" : "doc['info.Age'].value > doc['info.AgeExpectancy'].value",
                    "lang" : "painless"
                  },
                  "boost" : 1.0
                }
              }
            ],
            "adjust_pure_negative" : true,
            "boost" : 1.0
          }
        }
      ],
      "adjust_pure_negative" : true,
      "boost" : 1.0
    }
  },
  "_source" : {
    "includes" : [
      "info"
    ],
    "excludes" : [ ]
  }
}
However, this query fails with:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "script_exception",
        "reason" : "runtime error",
        "script_stack" : [
          "org.elasticsearch.index.fielddata.ScriptDocValues$Longs.get(ScriptDocValues.java:121)",
          "org.elasticsearch.index.fielddata.ScriptDocValues$Longs.getValue(ScriptDocValues.java:115)",
          "doc['info.Age'].value > doc['info.AgeExpectancy'].value",
          "                      ^---- HERE"
        ],
        "script" : "doc['info.Age'].value > doc['info.AgeExpectancy'].value",
        "lang" : "painless",
        "position" : {
          "offset" : 22,
          "start" : 0,
          "end" : 70
        }
      }
    ],
    "type" : "search_phase_execution_exception",
    "reason" : "all shards failed",
    "phase" : "query",
    "grouped" : true,
    "failed_shards" : [
      {
        "shard" : 0,
        "index" : "indexPeople",
        "node" : "c_Dv3IrlQmyvIVpLoR9qVA",
        "reason" : {
          "type" : "script_exception",
          "reason" : "runtime error",
          "script_stack" : [
            "org.elasticsearch.index.fielddata.ScriptDocValues$Longs.get(ScriptDocValues.java:121)",
            "org.elasticsearch.index.fielddata.ScriptDocValues$Longs.getValue(ScriptDocValues.java:115)",
            "doc['info.Age'].value > doc['info.AgeExpectancy'].value",
            "                      ^---- HERE"
          ],
          "script" : "doc['info.Age'].value > doc['info.AgeExpectancy'].value",
          "lang" : "painless",
          "position" : {
            "offset" : 22,
            "start" : 0,
            "end" : 70
          },
          "caused_by" : {
            "type" : "illegal_state_exception",
            "reason" : "A document doesn't have a value for a field! Use doc[<field>].size()==0 to check if a document is missing a field!"
          }
        }
      }
    ]
  },
  "status" : 400
}
Is there a way to achieve this?
What is the best way to debug it? I wanted to print the objects or look at the logs (which aren't there), but I couldn't find a way to do either.
The mapping is:
{
  "mappings": {
    "_doc": {
      "properties": {
        "info": {
          "properties": {
            "Age": {
              "type": "long"
            },
            "AgeExpectancy": {
              "type": "long"
            }
          }
        }
      }
    }
  }
}
Perhaps you already solved the issue. The reason the query failed is clear:
"caused_by" : {
"type" : "illegal_state_exception",
"reason" : "A document doesn't have a value for a field! Use doc[<field>].size()==0 to check if a document is missing a field!"
}
Basically, one or more documents do not have one of the queried fields, so you can achieve the result you need by using an if to check whether the fields do indeed exist. If they do not, you can simply return false, as follows:
{
  "script": """
    if (doc['info.Age'].size() > 0 && doc['info.AgeExpectancy'].size() > 0) {
      return doc['info.Age'].value > doc['info.AgeExpectancy'].value;
    }
    return false;
  """
}
I tested it with Elasticsearch 7.10.2 and it works.
What is the best way to debug it
That is a tough question; perhaps someone has a better answer for it. I will try to list some options. Obviously, debugging requires reading the error messages carefully.
PAINLESS LAB
If you have a reasonably recent version of Kibana, you can try the Painless Lab to simulate your documents and get the errors quicker, in a more focused environment.
KIBANA SCRIPTED FIELD
You can try to create a boolean scripted field named condition in the index pattern. Before clicking "Create", remember to click "Preview result".
MINIMAL EXAMPLE
Create a minimal example to reduce the complexity. For this answer I used a sample index with four documents covering all possible cases (a sketch of this setup follows the list):
No info: { "message": "ok" }
info.Age but not AgeExpectancy: { "message": "ok", "info": { "Age": 14 } }
info.AgeExpectancy but not Age: { "message": "ok", "info": { "AgeExpectancy": 12 } }
Both info.Age and AgeExpectancy: { "message": "ok", "info": { "Age": 14, "AgeExpectancy": 12 } }
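A sketch of that minimal setup, using a hypothetical index name so_people (my choice, not from the original answer); dynamic mapping types both fields as long:

# index the four sample documents
POST so_people/_doc/
{ "message": "ok" }

POST so_people/_doc/
{ "message": "ok", "info": { "Age": 14 } }

POST so_people/_doc/
{ "message": "ok", "info": { "AgeExpectancy": 12 } }

POST so_people/_doc/
{ "message": "ok", "info": { "Age": 14, "AgeExpectancy": 12 } }

# only the last document matches the guarded script
POST so_people/_search
{
  "query": {
    "bool": {
      "filter": {
        "script": {
          "script": {
            "source": "if (doc['info.Age'].size() > 0 && doc['info.AgeExpectancy'].size() > 0) { return doc['info.Age'].value > doc['info.AgeExpectancy'].value; } return false;",
            "lang": "painless"
          }
        }
      }
    }
  }
}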

Elasticsearch field mapping affects other types in the same index

I was told that "Every type has its own mapping, or schema definition" in the official guide.
But what I have actually observed is that a mapping can affect other types within the same index. Here is the situation:
Mapping definition:
[root@localhost agent]# curl localhost:9200/agent*/_mapping?pretty
{
  "agent_data" : {
    "mappings" : {
      "host" : {
        "_all" : {
          "enabled" : false
        },
        "properties" : {
          "ip" : {
            "type" : "ip"
          },
          "node" : {
            "type" : "string",
            "index" : "not_analyzed"
          }
        }
      },
      "vul" : {
        "_all" : {
          "enabled" : false
        }
      }
    }
  }
}
And then I index a record:
[root@localhost agent]# curl -XPOST 'http://localhost:9200/agent_data/vul?pretty' -d '{"ip": "1.1.1.1"}'
{
  "error" : {
    "root_cause" : [ {
      "type" : "mapper_parsing_exception",
      "reason" : "failed to parse [ip]"
    } ],
    "type" : "mapper_parsing_exception",
    "reason" : "failed to parse [ip]",
    "caused_by" : {
      "type" : "number_format_exception",
      "reason" : "For input string: \"1.1.1.1\""
    }
  },
  "status" : 400
}
It seems that it tries to parse the ip as a number, so I put a number in this field:
[root@localhost agent]# curl -XPOST 'http://localhost:9200/agent_data/vul?pretty' -d '{"ip": "1123"}'
{
  "error" : {
    "root_cause" : [ {
      "type" : "remote_transport_exception",
      "reason" : "[Argus][127.0.0.1:9300][indices:data/write/index[p]]"
    } ],
    "type" : "illegal_argument_exception",
    "reason" : "mapper [ip] cannot be changed from type [ip] to [long]"
  },
  "status" : 400
}
This problem goes away if I explicitly define the ip field of the vul type with the ip field type.
I don't quite understand the behavior above. Do I miss something?
Thanks in advance.
The statement
Every type has its own mapping, or schema definition
is true. But this is not complete information.
There may be conflicts between different types with the same field within one index.
Mapping - field conflicts

Mapping types are used to group fields, but the fields in each mapping type are not independent of each other. Fields with:

the same name
in the same index
in different mapping types

map to the same field internally, and must have the same mapping. If a title field exists in both the user and blogpost mapping types, the title fields must have exactly the same mapping in each type. The only exceptions to this rule are the copy_to, dynamic, enabled, ignore_above, include_in_all, and properties parameters, which may have different settings per field.
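A sketch of the explicit mapping the asker mentions as the workaround, using the legacy per-type mapping API from that era of Elasticsearch (the index and type names come from the question):

curl -XPUT 'http://localhost:9200/agent_data/_mapping/vul' -d '{
  "properties": {
    "ip": {
      "type": "ip"
    }
  }
}'

With ip declared identically in both the host and vul types, the two definitions map to the same internal field and the conflict disappears.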
