Kibana No results found for my index pattern - elasticsearch

I created this index with the following mapping so that I can find my objects when I search in Kibana:
curl -XPUT 'localhost:9200/exception_index?pretty' -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "siam_exception": {
      "dynamic": "true",
      "properties": {
        "Title": { "type": "text" },
        "Date&Time": { "type": "text" },
        "Tags": { "type": "text" },
        "Level": { "type": "text" },
        "Message": {
          "dynamic": "true",
          "properties": {
            "Root": { "type": "text" },
            "ExceptionList": { "type": "text" }
          }
        }
      }
    }
  }
}
'
and this is my exception.log:
"siam_exception":{
"Title": "exception",
"Date&Time": "2018-01-21 09:52:20.759950",
"Tags": "SiamCore",
"Level": "Fatal",
"Message":[
{
"Root": "( ['MQROOT' : 0x7f0a902b2d80]
(0x01000000:Name ):Properties = ( ['MQPROPERTYPARSER' : 0x7f0a902bffa0]
(0x03000000:NameValue):MessageSet = 'SIAMCOMMONMSG' (CHARACTER)",
"ExceptionList": "(0x03000000:NameValue):ReplyIdentifier = X'000000000000000000000000000000000000000000000000' (BLOB)","
}
]
}
but Kibana finds nothing when I search for *.
Even if I omit "siam_exception": from the first line of exception.log, nothing changes.
What is my problem?
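For reference: the PUT above only creates the index and its mapping. The documents from exception.log still have to be ingested into Elasticsearch (for example via Logstash or Filebeat) before Kibana can find anything. A quick sanity check, assuming the index name above:

# If this returns 0, nothing has been ingested yet, so Kibana has nothing to show.
curl 'localhost:9200/exception_index/_count?pretty'

# Hypothetical manual insert, just to confirm the mapping accepts a document:
curl -XPOST 'localhost:9200/exception_index/siam_exception?pretty' -H 'Content-Type: application/json' -d'
{
  "Title": "exception",
  "Tags": "SiamCore",
  "Level": "Fatal"
}
'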

Related

Copy field value to a new field in existing index

I have a document whose structure contains an object field with a nested field inside it. The nested field stores all interactions that occurred in an internal communication.
I need to create a new field inside the nested field, with a new type, that will store the old field's content indexed with a new analyzer.
How can I copy the data from the old field to the new field inside the nested field?
My document:
curl -XPUT 'localhost:9200/problems?pretty' -H 'Content-Type: application/json' -d '
{
"settings": {
"number_of_shards": 1
},
"mappings": {
"problem": {
"properties": {
"problemid": {
"type": "long"
},
"subject": {
"type": "text",
"index": true
},
"usermessage": {
"type": "object",
"properties": {
"content": {
"type": "nested",
"properties": {
"messageid": {
"type": "long",
"index": true
},
"message": {
"type": "text",
"index": true
}
}
}
}
}
}
}
}
}'
My New Field:
curl -XPUT 'localhost:9200/problems/_mapping/problem?pretty' -H 'Content-Type: application/json' -d '
{
"properties": {
"usermessage": {
"type": "object",
"properties": {
"content": {
"type": "nested",
"properties": {
"message_accents" : {
"type" : "text",
"analyzer" : "ignoreaccents"
}
}
}
}
}
}
}
'
Data Example:
{
  "problemid": 1,
  "subject": "Test",
  "usermessage": [
    {
      "messageid": 1,
      "message": "Hello"
    },
    {
      "messageid": 2,
      "message": "Its me"
    }
  ]
}
My script to copy fields:
curl -XPOST 'localhost:9200/problems/_update_by_query' -H 'Content-Type: application/json' -d '
{
"query": {
"match_all": {
}
},
"script": "ctx._source.usermessage.content.message_accents = ctx._source.usermessage.content.message"
}'
I tried the code below, but it didn't work; it returns an error.
curl -XPOST 'localhost:9200/problems/_update_by_query' -H 'Content-Type: application/json' -d '
{
"query": {
"match_all": {
}
},
"script": "ctx._source.usermessage.content.each { elm -> elm.message_accents = elm.message }"
}
'
Error:
"script":"ctx._source.usermessage.content.each { elm -> elm.message_accents = elm.message }","lang":"painless","caused_by":{"type":"illegal_argument_exception","reason":"unexpected token ['{'] was expecting one of [{, ';'}]."}},"status":500}%

Simple elasticsearch input - Rejecting mapping update final mapping would have more than 1 type: [_doc, doc]

I'm trying to send data to Elasticsearch but I'm running into an issue where my number field only comes up as a string. These are the steps I took.
Step 1. Add index & map
PUT http://123.com:5101/core_060619/
{
"mappings": {
"properties": {
"date": {
"type": "date",
"format": "HH:mm yyyy-MM-dd"
},
"data": {
"type": "integer"
}
}
}
}
Result:
{
"acknowledged": true,
"shards_acknowledged": true,
"index": "core_060619"
}
Step 2. Add data
PUT http://123.com:5101/core_060619/doc/1
{
"test" : [ {
"data" : "119050300",
"date" : "00:00 2019-06-03"
} ]
}
Result:
{
"error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "Rejecting mapping update to [zyxnewcoreyxbl_060619] as the final mapping would have more than 1 type: [_doc, doc]"
}
],
"type": "illegal_argument_exception",
"reason": "Rejecting mapping update to [zyxnewcoreyxbl_060619] as the final mapping would have more than 1 type: [_doc, doc]"
},
"status": 400
}
You cannot have more than one type of document in Elasticsearch 6.0.0+. If you set your document type to doc, then you can add more documents by simply PUT http://123.com:5101/core_060619/doc/1, PUT http://123.com:5101/core_060619/doc/2, etc.
Elasticsearch 6.+
PUT core_060619/
{
"mappings": {
"doc": { //type of documents in index is 'doc'
"properties": {
"date": {
"type": "date",
"format": "HH:mm yyyy-MM-dd"
},
"data": {
"type": "integer"
}
}
}
}
}
Since we created the mapping with the doc document type, we can now add new documents by simply appending /doc/_id:
PUT core_060619/doc/1
{
"test" : [ {
"data" : "119050300",
"date" : "00:00 2019-06-03"
} ]
}
PUT core_060619/doc/2
{
"test" : [ {
"data" : "111120300",
"date" : "10:15 2019-06-02"
} ]
}
Elasticsearch 7.+
Types are removed, but you can use a custom type-like field instead:
PUT twitter
{
"mappings": {
"_doc": {
"properties": {
"type": { "type": "keyword" },
"name": { "type": "text" },
"user_name": { "type": "keyword" },
"email": { "type": "keyword" },
"content": { "type": "text" },
"tweeted_at": { "type": "date" }
}
}
}
}
PUT twitter/_doc/user-kimchy
{
"type": "user",
"name": "Shay Banon",
"user_name": "kimchy",
"email": "shay#kimchy.com"
}
PUT twitter/_doc/tweet-1
{
"type": "tweet",
"user_name": "kimchy",
"tweeted_at": "2017-10-24T09:00:00Z",
"content": "Types are going away"
}
GET twitter/_search
{
"query": {
"bool": {
"must": {
"match": {
"user_name": "kimchy"
}
},
"filter": {
"match": {
"type": "tweet"
}
}
}
}
}
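Applied back to the question: the Step 1 mapping was created without a type name (7.x style), so the index already has the implicit _doc type, and indexing through the _doc endpoint instead of /doc/ avoids the conflict. A sketch, assuming Elasticsearch 7.x (sending data as a number also keeps it from coming back as a string in _source):
PUT http://123.com:5101/core_060619/_doc/1
{
  "test" : [ {
    "data" : 119050300,
    "date" : "00:00 2019-06-03"
  } ]
}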
Removal of mapping types

Elastic Search scaled float not accepting data

Following is the mapping for the index index_new
{
"mappings": {
"_doc": {
"properties": {
"title": {
"type": "text"
},
"name": {
"type": "text"
},
"age": {
"type": "integer"
},
"price_amount": {
"properties": {
"val": {
"type": "scaled_float",
"scaling_factor": 100
}
}
}
}
}
}
}
While adding the data to this index
curl -X POST \
http://localhost:9200/new_index1/1 \
-H 'Content-Type: application/json' \
-d '{
"title": "new data for index",
"name": "balakarthik",
"age": 24,
"price_amount": {
"val": 1005,
"scaling_factor": 100
}
}'
this returns an error
{
"error": {
"root_cause": [
{
"type": "mapper_parsing_exception",
"reason": "failed to parse"
}
],
"type": "mapper_parsing_exception",
"reason": "failed to parse",
"caused_by": {
"type": "illegal_argument_exception",
"reason": "Field [val] misses required parameter [scaling_factor]"
}
},
"status": 400
}
As the scaling factor is already provided in the mapping, do we need to send the scaling factor in each request? If so, how should we send it?
Definition for scaled_float is available here
I encountered this error but got past it by changing the URL.
Your PUT should go to: http://localhost:9200/new_index1/_doc/1
Also, you should change your input data to specify only the value, keeping it nested under val to match the mapping. Something like this:
{
  "title": "new data for index",
  "name": "balakarthik",
  "age": 24,
  "price_amount": {
    "val": 10.05
  }
}
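Putting both changes together, a sketch of the corrected request (assuming the index really is named new_index1, as in the URL above):
curl -X PUT \
  http://localhost:9200/new_index1/_doc/1 \
  -H 'Content-Type: application/json' \
  -d '{
  "title": "new data for index",
  "name": "balakarthik",
  "age": 24,
  "price_amount": {
    "val": 10.05
  }
}'
The scaling_factor lives only in the mapping; at index time you send just the value, and Elasticsearch applies the factor internally.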

"index": "not_analyzed" in elasticsearch

I have deleted the mapping with this command:
curl -XDELETE 'http://localhost:9200/logstash_log*/'
In my conf, I have defined the index as follows:
output {
  elasticsearch {
    hosts => localhost
    index => "logstash_log-%{+YYYY.MM.dd}"
  }
}
and tried to create a new mapping, but I got this error:
#curl -XPUT http://localhost:9200/logstash_log*/_mapping/log -d '
{
  "properties": {
    "@timestamp": {"type": "date", "format": "strict_date_optional_time||epoch_millis"},
    "message": {"type": "string"},
    "host": {"type": "ip"},
    "name": {"type": "string", "index": "not_analyzed"},
    "type": {"type": "string"}
  }
}'
{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"logstash_log*","index":"logstash_log*"}],"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"logstash_log*","index":"logstash_log*"},"status":404}
How can I fix it?
Any help will be appreciated!
You need to re-create your index like this:
# curl -XPUT http://localhost:9200/logstash_log -d '{
"mappings": {
"log": {
"properties": {
"#timestamp": {
"type": "date",
"format": "strict_date_optional_time||epoch_millis"
},
"message": {
"type": "string"
},
"host": {
"type": "ip"
},
"name": {
"type": "string",
"index": "not_analyzed"
},
"type": {
"type": "string"
}
}
}
}
}'
Although since it looks like you're creating daily indices from logstash, you're probably better off creating a template instead. Store the following content inside index_template.json
{
"template": "logstash-*",
"mappings": {
"log": {
"properties": {
"#timestamp": {
"type": "date",
"format": "strict_date_optional_time||epoch_millis"
},
"message": {
"type": "string"
},
"host": {
"type": "ip"
},
"name": {
"type": "string",
"index": "not_analyzed"
},
"type": {
"type": "string"
}
}
}
}
}
And then modify your logstash configuration like this:
output {
  elasticsearch {
    hosts => localhost
    index => "logstash_log-%{+YYYY.MM.dd}"
    manage_template => true
    template_name => "logstash"
    template => "/path/to/index_template.json"
    template_overwrite => true
  }
}
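Alternatively, the same template file can be registered directly with the template API instead of letting Logstash manage it:
curl -XPUT 'http://localhost:9200/_template/logstash' -H 'Content-Type: application/json' -d @index_template.json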
* is an invalid character for an index name.
Index name must not contain the following characters [\, /, *, ?, ", <, >, |, ' ' (space), ',']
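Note that wildcards are fine for reading existing indices, just not for creating one. To check which daily indices actually exist before applying a mapping:
curl 'http://localhost:9200/_cat/indices/logstash_log-*?v'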

Elasticsearch - index only part of the object

Is it possible to index only some part of the object in elasticsearch?
Example:
$ curl -XPUT 'http://localhost:9200/test/item/1' -d '
{
  "record": {
    "city": "London",
    "contact": "Some person name"
  }
}'
$ curl -XPUT 'http://localhost:9200/test/item/2' -d '
{
  "record": {
    "city": "London",
    "contact": { "phone": "some-phone-number", "name": "Other person'\''s name" }
  }
}'
$ curl -XPUT 'http://localhost:9200/test/item/3' -d '
{
  "record": {
    "city": "Oslo",
    "headquarters": { "phone": "some-other-phone-number", "address": "some address" }
  }
}'
I want only the city name to be searchable; the remaining part of the object should stay unindexed and completely arbitrary. For example, some fields can change their type from object to object.
Is it possible to write a mapping that allows such behaviour?
UPDATE
My final solution looks like this:
{
"test": {
"dynamic": "false",
"properties": {
"name": {
"type": "string"
}
}
}
}
I add "dynamic": "false" on the lowest level of my mapping and it works as expected.
You can achieve this by disabling dynamic mapping on the entire type or just on the inner object record:
"mappings": {
"doc": {
"properties": {
"record": {
"type": "object",
"properties": {
"city": {"type": "string"}
},
"dynamic": false
}
}
}
}
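With dynamic mapping disabled, the unmapped fields are still stored in _source and returned with the document; they are just not searchable. A quick check against the example documents above (only the mapped record.city field should match):
curl -XGET 'http://localhost:9200/test/item/_search?pretty' -d '
{
  "query": {
    "match": { "record.city": "London" }
  }
}'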
