Kibana search in JSON field as string in APM logs - elasticsearch

I have Elasticsearch + APM + Kibana configured for my services. Every HTTP request is traced by APM, and I'm currently capturing the body of each request. The field which stores the body inside the APM index is http.request.body.original.
The field looks like this:
The problem is that I can't search inside that field. Something like http.request.body.original : *testuser* doesn't work. The body could be simple JSON. Is there a way to allow searching in that field? I need to prepare a dashboard with the requests that contain a specific user inside the body.
Thanks.
UPDATE
HTTP mapping image of the apm-transaction index
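A first thing worth checking is how that field is actually mapped, since the APM index may store the request body without indexing it as searchable text. Below is only a sketch for inspecting the mapping in Dev Tools; the apm-* index pattern is an assumption and may differ in your setup:
GET apm-*/_mapping/field/http.request.body.original
If the field is not indexed, or not mapped as text, a wildcard query such as *testuser* will not match anything inside it.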

Related

Kibana Scripted fields values not getting populated on visualize but are populated during discover

I am using Kibana scripted fields using painless to populate a URL like this:
Url Template : https://dummy_url?branch={{value}}&id=abc
Script:
if (!doc['branch_name'].empty) {
    return doc['branch_name'].value;
}
When I access this scripted field for my index pattern in Discover, the value gets populated correctly and I can access the URL, but when I access the same scripted field in a "data table" Visualize, the URL is missing the value populated by {{value}}.
I have already tried using {{rawValue}} and doc['branch_name.keyword'], but neither worked.
Can you please help on how I can populate the scripted field correctly in the data table visualization?
I am using Kibana version 5.4.1.
The doc has values like this:
branch_name
develop
master
release/f1
release/f2
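One way to sanity-check the script itself, independent of the URL template, is to run it as a script field in a search from Dev Tools. This is only a sketch: the my-index name is hypothetical and the script mirrors the one above (with an explicit fallback so every document returns a value):
GET my-index/_search
{
  "script_fields": {
    "branch": {
      "script": {
        "lang": "painless",
        "inline": "if (!doc['branch_name'].empty) { return doc['branch_name'].value; } return '';"
      }
    }
  }
}
If this returns the expected branch names, the script is fine and the issue is in how the data table visualization renders the URL template.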

geoip.location does not work with modified index names sent via Logstash

geoip.location is of the geo_point datatype when an event is sent from Logstash to Elasticsearch with the default index name. Because geoip.location has the geo_point datatype, I can view the plotted locations on maps in Kibana, since Kibana looks for the geo_point datatype for maps.
geoip.location becomes geoip.location.lat and geoip.location.lon with the number datatype when an event is sent from Logstash to Elasticsearch with a modified index name. Due to this I'm not able to view the plotted locations on maps in Kibana.
I don't understand why Elasticsearch would behave differently when I try to add data to a modified index name. Is this a bug in Elasticsearch?
For my use case I need to use a modified index name, as I need a new index for each day. The plan is to store the logs of a particular day in a single index, so if there are 7 days I need 7 indexes, each containing the logs of one day (a new index should be created based on the current date).
I searched around for a solution, but I'm not able to comprehend it and make it work for me. Kindly help me out on this.
Update (what I did after reading xeraa's answer)
In Dev Tools in Kibana:
GET _template/logstash showed the allowed patterns in the index_patterns property, along with other properties.
I included my pattern (dave*) inside index_patterns and triggered the PUT request. You have to pass the entire existing body content (which you get back from the GET request) in the PUT request along with your required index_patterns, otherwise the default settings will disappear, because the PUT API replaces the template with whatever you pass in the body:
PUT _template/logstash
{
  ...
  "index_patterns": [
    "logstash-*", "dave*"
  ],
  ...
}
I'd guess that there is a template set up for the default name, which isn't being applied when you rename the index.
Check with GET _template if any match your old index name and update the setting so that it also gets applied to the new one.
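To verify that the template is now being applied, the mapping of an index created under the new name can be checked; if it worked, geoip.location should be geo_point again. A sketch, reusing the dave* pattern from the update above:
GET dave*/_mapping/field/geoip.location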

Search a String in Kibana

Trying to search for a complete JSON request in the Kibana web app.
Sample
Request body:
{"mobileNumber":"***** ","custType":"abc","rejectReasonDesc":"","applicationId":"*****"}
I want to filter only the requests with "rejectReasonDesc":"", i.e. an empty rejectReasonDesc value.
Please help on this.
1. Create an index pattern to point to your index.
2. Click on the "Add filter" link.
3. Select the rejectReasonDesc.keyword field as per the image below.
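If the filter UI is not an option, the same condition can be expressed as a query against the keyword sub-field. This is only a sketch; the my-requests index name is hypothetical:
GET my-requests/_search
{
  "query": {
    "term": {
      "rejectReasonDesc.keyword": ""
    }
  }
}
A term query on the .keyword field matches documents whose rejectReasonDesc is exactly the empty string.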

How do you allow a particular HTTP method on Elasticsearch 6 for indexes with no types?

I am taking some courses on Udemy on Elasticsearch and trying to set up an Elasticsearch project. I have gotten the bulk API to work and can successfully send batch data into an index on Elasticsearch, but I am having trouble sending data without the bulk API. Because I have read in the Elasticsearch docs that the analogy that a type is like a table in a database and an index is like a database is 'false', I decided for this project to create an index for each of the entities that I want to persist, which are users and statistics. Therefore I have indices called statistics and users. When I make the following request from Postman
headers: Content-Type application/json
POST http://localhost:9200/users
with body:
{"id": 1, "name":"myname"}
I get an error
{"error":"Incorrect HTTP method for uri [/users] and method [POST],
allowed: [PUT, DELETE, GET, HEAD]","status":405}
How can I allow this http method?
Hello, I had the same issue using PHP and Java. This happens simply because you must add the _type after the index name (described here):
https://www.elastic.co/guide/en/elasticsearch/guide/current/index-doc.html
Try curl -XPOST http://localhost:9200/users/_admin or something similar ;)
See https://www.elastic.co/guide/en/elasticsearch/guide/current/mapping.html
Since version 6.2, we need to use _doc for the type.
https://discuss.elastic.co/t/cant-use-doc-as-type-despite-it-being-declared-the-preferred-method/113837
https://www.elastic.co/guide/en/elasticsearch/reference/master/removal-of-types.html#_schedule_for_removal_of_mapping_types
In fact, when you say there are "no types" allowed in Elasticsearch, that is false for versions before 8.0 (where mapping types are removed entirely). What is true is that only one type is allowed for each index.
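Putting the answers together, on a 6.x cluster the request from the question should work once a type (since 6.2, _doc) is added to the path. A sketch of the corrected call, keeping the body from the question:
POST http://localhost:9200/users/_doc
{"id": 1, "name": "myname"}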

Kibana doesn't show any results in “Discover” tab

I have set up Elasticsearch (version 1.7.3) and Kibana (version 4.1.2) to index our application's Elmah XML error files. I am using .NET to parse the XML files and the NEST Elasticsearch client to insert the documents into Elasticsearch. The issue is that Kibana doesn't display any data in the "Discover" tab.
When I run the curl -XGET localhost:9200/.kibana/index-pattern/eol? command, I get the following response:
{"_index":".kibana","_type":"index-pattern","_id":"eol","_version":2,"found":tru
e,"_source":{"title":"eol","timeFieldName":"errorTime","fields":"[{\"name\":\"_i
ndex\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"indexed\":false,\"an
alyzed\":false,\"doc_values\":false},{\"name\":\"filePath\",\"type\":\"string\",
\"count\":0,\"scripted\":false,\"indexed\":true,\"analyzed\":true,\"doc_values\"
:false},{\"name\":\"_type\",\"type\":\"string\",\"count\":0,\"scripted\":false,\
"indexed\":true,\"analyzed\":false,\"doc_values\":false},{\"name\":\"message\",\
"type\":\"string\",\"count\":0,\"scripted\":false,\"indexed\":true,\"analyzed\":
true,\"doc_values\":false},{\"name\":\"errorTime\",\"type\":\"date\",\"count\":0
,\"scripted\":false,\"indexed\":true,\"analyzed\":false,\"doc_values\":false},{\
"name\":\"_source\",\"type\":\"_source\",\"count\":0,\"scripted\":false,\"indexe
d\":false,\"analyzed\":false,\"doc_values\":false},{\"name\":\"_id\",\"type\":\"
string\",\"count\":0,\"scripted\":false,\"indexed\":false,\"analyzed\":false,\"d
oc_values\":false}]"}}
Current situation
Elasticsearch is up and running and responds to the API; executing a query directly on Elasticsearch like http://localhost:9200/eol/_search?q=* returns lots of results
Kibana is up and running, even finds the "eol" index exposed by Elasticsearch
Kibana also shows the correct properties and data type of the "eol" documents
"Discover" tab doesn't show any results...even when setting the time period to a couple of years...
I have tried deleting the index pattern from the Settings tab, restarting Kibana, then re-adding the index pattern in Settings.
I have also tried to save the date field with a yyyy-MM-ddThh:mm:ss format but I still do not see any results.
I believe the issue is with either the Elmah UTC date format(An example is 2015-10-13T19:54:49.4547709Z) or the Elmah message. I guess ElasticSearch likes the Elmah message but Kibana does not.
Any ideas??
Here's how Kibana sees the "eol" index:
...and here is what I see in the Discover tab:
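Since the suspicion above is about the errorTime date format, one quick check is whether Elasticsearch actually mapped errorTime as a date. This is only a sketch (the eol index name comes from the question; the same request also works via curl on 1.7):
GET eol/_mapping
If the mapping does not show errorTime with type date, or the values were indexed in an unexpected format, the time range in Discover would filter everything out.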
I was using NEST to insert data into Elasticsearch. It seems that the way NEST serializes a List and makes the request to Elasticsearch introduces special characters that Kibana does not like.
Before (not working):
private static void WriteErrorsIntoElasticSearchIndex(ElasticClient elasticClient, List<error> errors)
{
    elasticClient.Index(errors);
}
After (working):
private static void WriteErrorsIntoElasticSearchIndex(ElasticClient elasticClient, List<error> errors)
{
    foreach (var error in errors)
    {
        elasticClient.Index(error);
    }
}
you have "\" , normally in elasticsearch result there is not, JSON can not parse the result because it is not appropriate,
