Kibana 4: How to remove a saved Discover request

A trivial question, but I can't figure out how to remove or clean up some saved searches in the Discover tab.
Thanks for any help.

Go into Settings, select the Objects tab, then the Searches sub-tab, tick the checkbox next to anything you want to remove, and hit the "Delete selected" button.

In recent versions of Kibana you need to go to Stack Management -> Saved Objects, select the saved searches that you want to delete, and press the Delete button.
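If you prefer to script this, recent Kibana versions also expose a Saved Objects HTTP API. A minimal sketch (the saved-search ID is a placeholder you would look up with the _find call first, and exact paths can vary by version):
curl -s 'http://localhost:5601/api/saved_objects/_find?type=search' -H 'kbn-xsrf: true'
curl -X DELETE 'http://localhost:5601/api/saved_objects/search/<saved-search-id>' -H 'kbn-xsrf: true'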

If you are using Kibana 5.6 with Filebeat, you can delete the underlying index from the Dev Tools console as follows:
DELETE filebeat-*
{
"query": {
"match_all": {}
}
}
The output will be:
{
  "acknowledged": true
}
Now, if you check the Discover tab, all your searches against the filebeat-* index should come back empty. Be aware that what you are deleting here is the index you created, not just a saved search.
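If you only want to get rid of the saved search itself rather than the data, a less destructive option is to delete just the saved object. This is a sketch assuming Kibana 5.x, where saved searches are stored as documents of type "search" in the .kibana index; the ID is a placeholder:
GET .kibana/search/_search
DELETE .kibana/search/<saved-search-id>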

Related

Property not available for Visualize in Kibana

While trying to change a visualization in Kibana to use another property for the x-axis, that property doesn't appear there.
I recently changed NLog to target Elasticsearch using the Elastic Common Schema.
After that change, the property is no longer called ResolvedRoute but _metadata.resolved_route; the problem is that it doesn't appear in the field list for the x-axis, it says "no matches found".
It is not in the available fields.
I'm still new to Elasticsearch and Kibana, so it's possible I'm missing something simple.
I don't know if it's related, but in the Discover menu, looking at the Available fields, all of the _metadata fields have a question mark.
I'm already trying to map some of these fields in Index Management / Edit template.
Also, if I go to the console and type
GET /logstash-2020.11.25/_search
{
"query": {
"match_all": {}
}
}
I can see the fields of _metadata that I want, inside _source, which is inside hits.
I think I already had a similar problem where I had to delete all indexes that matched the pattern and then the field appeared, but that doesn't make much sense.
What could be the problem?
Chances are high that you haven't refreshed the corresponding index pattern in Kibana. Therefore the data might exist as documents in Elasticsearch but not yet as a field in the index pattern, which is a Kibana Saved Object.
Please go to Settings / Stack Management (depending on your Kibana version), click on the index pattern you expect the field to be in and refresh the fields list (icon is in the upper right corner).
Please let me know if that solved your problem.
The fields in question were not correctly mapped in the template.
Since _metadata is an object, it needs to be mapped as such first;
then, inside of it, we can map its own properties, as sketched below.
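For illustration, a sketch of what such a template mapping might look like, using the legacy template API and the field name from the question (the template name, index pattern and field types are assumptions to adapt to your setup):
PUT _template/logstash-template
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "properties": {
      "_metadata": {
        "properties": {
          "resolved_route": { "type": "keyword" }
        }
      }
    }
  }
}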

Is there a way to define a dynamic query in Kibana dashboard?

A somewhat similar question has been asked here, but there's no answer for it yet. That question relates to an older version of Kibana, so I hope you can help me.
I'm trying to set up some predefined queries in the Kibana dashboard. I'm using Kibana 5.1. The purpose of those queries is to filter logs based on several different parameters.
Let's see a query I'd like to execute:
{
"index": "${index_name}",
"query": {
"query_string": {
"query": "message:(+\"${LOG_LEVEL}\")",
"analyze_wildcard": true
}
}
}
I know I can query directly in the dashboard something like "message:(+"ERROR")" and manually change the ERROR to WARN for example, but I don't want that - imagine that this query might be more complex and contain multiple fields.
Note that the data stored in the message is not structured - think of the message as a whole log line. This means I don't have fields like LOG_LEVEL which I could filter directly.
Is there any way I can set the index_name and LOG_LEVEL dynamically from the Kibana Discover dashboard?
Go to Discover, open one document and click the add-filter button on any of its fields. After that, a filter will appear under the search bar, and you can edit it and put any custom query inside. If you want to add more filters with more custom queries, you can repeat the same action with a different document or field, or you can go to Settings (or Management), Saved Objects, open the search you saved, look at its JSON representation, and copy and paste the elements inside the filter array as many times as you want.
And remember that in order to apply only one of the filters, you should probably disable the other enabled ones (otherwise it will filter by all of the enabled filters in your dashboard).
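As a rough illustration, the query you would paste into such a filter (via its "Edit Query DSL" option) could look like this, reusing the message/log-level example from the question:
{
  "query_string": {
    "query": "message:(+\"ERROR\")",
    "analyze_wildcard": true
  }
}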

Kibana doesn't show any results in "Discover" tab

I set up Elasticsearch and Kibana for indexing our application (error) logs. The issue is that Kibana doesn't display any data in the "Discover" tab.
Current situation
Elasticsearch is up and running, responds to API
executing a query directly on Elasticsearch like http://elasticserver.com:9200/applogs/_search?q=* returns lots of results (see below for what a single record looks like)
Kibana is up and running, even finds the applogs index exposed by Elasticsearch
Kibana also shows the correct properties and data type of the applogs documents
"Discover" tab doesn't show any results...even when setting the time period to a couple of years...
Any ideas??
Here's how Kibana sees the applogs index:
The Elasticsearch query result object looks like this:
{
  "_index": "applogs",
  "_type": "1",
  "_id": "AUxv8uxX6xaLDVAP5Zud",
  "_score": 1,
  "_source": {
    "appUid": "esb.Idman_v4.getPerson",
    "level": "trace",
    "message": "WS stopwatch is at 111ms.",
    "detail": "",
    "url": "",
    "user": "bla bla bla",
    "additionalInfo": "some more info",
    "timestamp": "2015-03-31T15:08:49"
  }
},
...and what I see in the Discover tab:
For people who have a problem like this:
Change the time frame in the top right corner.
By default it only shows data for the last 15 minutes.
I wanted to put this as a comment but unfortunately I am not able to, given my insufficient reputation. So, as #Ngeunpo suggested, this is how you add a time field to an index while creating it. If you did not do that while creating your index, I suggest you delete that index and recreate it. The index name logstash-* in the gif is analogous to your index applogs. In this case, the field #timestamp is added as the time field. Let me know if this works.
EDIT: Image courtesy: this wonderful ELK setup guide
Kibana does not understand the timestamp field if its format is incorrect. The timestamp, which you selected by clicking on "Time-field name" when configuring the index pattern, needs to be:
"timestamp":"2015-08-05 07:40:20.123"
Then you should update your index mapping like this:
curl -XPUT 'http://localhost:9200/applogs/1/_mapping' -d'
{
  "1": {
    "properties": {
      "timestamp": {
        "type": "date",
        "format": "yyyy-MM-dd HH:mm:ss.SSS",
        "store": true
      }
    }
  }
}'
See this question and answer
UPDATE
If you are using ES 2.X, you can set the "format" to "epoch_millis" like this:
curl -XPUT 'http://localhost:9200/applogs/1/_mapping' -d'
{
  "1": {
    "properties": {
      "timestamp": {
        "type": "date",
        "format": "epoch_millis",
        "store": true,
        "doc_values": true
      }
    }
  }
}'
Try this: untick the "Index contains time-based events" checkbox,
then provide your index name and check whether "Discover" contains data or not.
I had the same issue and this worked for me:
Delete the index from the Settings tab.
Restart Kibana.
Then re-add the index in Settings.
The time-series issues mentioned above can certainly also be a cause, but if no fields at all show up in the Discover tab, then you might have the same issue as the original reporter and as I had.
I had probably the same issue - I saw data in the dashboard but 0 results in Discover. Going to Management > Index Patterns > "Refresh field list" button (a button with only a refresh icon) solved it for me.
I had the same issue, and #tAn-'s comment helped me to resolve it. Changing the date field to #timestamp did the trick. Thanks!
The next step should be to find out what was wrong with my custom date field.
I had the same problem, but now it's working fine.
The problem was with the #timestamp. I had uploaded the file to Elasticsearch using Logstash, so it automatically generated a #timestamp field. Kibana compares the selected time range with this #timestamp, that is, with when the actual event occurred. Even if I deselect the "Index contains time-based events" option on the add-new-index-pattern page, Kibana will still use the #timestamp field. So adjusting the time frame in Kibana based on the #timestamp field worked for me.
You can also check by adding the index pattern without a timestamp and deselecting the "Index contains time-based events" option. See what happens: now there won't be any time frame selector on the Kibana Discover page, and you will most probably get results in Discover.
These are all my observations; I'm not sure this solution fits your case, but you may try it.
I am using ES 1.5.x, Logstash 1.5.1 and Kibana 4.1.0.
I also experienced the same problem. Mostly this happens because of the time format. Basically, make sure you have a valid time frame for your data (top-right filter). Anyway, in my case I had used a plain epoch (seconds) format for the timestamp and it didn't work, so I switched to epoch_millis instead and it worked like a charm.
In short, make sure that Kibana can understand your date-time format; it expects millisecond precision (epoch_millis) by default, not just epoch seconds.
In my situation, everything was working previously and then I couldn't see the latest data starting February 1st (actually, I could if I looked back a month). It turns out that the mapping format for my custom time field was incorrect. My mapping format was YYYY-MM-DD'T'HH:mm:ss.SSSZ. The problem was that DD is interpreted as day of the year, and I wanted day of the month, which is dd. Changing the mapping and reindexing fixed the problem (see the sketch below).
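For anyone hitting the same thing, here is a sketch of that fix in console syntax, assuming an Elasticsearch version that has the _reindex API; the index and field names are placeholders for your own:
PUT applogs_v2
{
  "mappings": {
    "properties": {
      "timestamp": {
        "type": "date",
        "format": "yyyy-MM-dd'T'HH:mm:ss.SSSZ"
      }
    }
  }
}

POST _reindex
{
  "source": { "index": "applogs" },
  "dest": { "index": "applogs_v2" }
}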
In my case, I set the time from a server log, and that time differed from UTC (the log's time was in the future compared to UTC time).
So when I searched logs with a filter of days/months/years ago, there were no logs, because their time was in the future.
When I used the Today filter, or included future time, it showed the logs.
After changing the time zone, it was fixed.
I had the same issue, so, as shown in one of the solutions above, I went to Settings, deleted the previous index and made a new one with #timestamp.
But that didn't solve the issue. So I looked into it and saw that, after a deployment, nothing was coming into Kibana.
I went onto the server and saw that the indexes were corrupted, so I just stopped Logstash and Elasticsearch on the instance/server and restarted the services.
And voilà, the services came back up and Kibana was working again.
Why did it happen?
Someone might have stopped the server abruptly, which caused the indexes to get corrupted.

How to delete document types in elasticsearch?

I created an index "myindex" with a specified document type "mytype". I am able to delete the index, but it appears that "mytype" still exists without being tied to the index.
How do I get rid of "mytype"?
If you really deleted the index, the mapping in this index should not exist anymore.
Do you have any other index in your cluster with a similar type name?
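One quick way to check is to list the mappings of all indices and look for the type name:
curl -XGET 'http://localhost:9200/_mapping?pretty'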
To answer to the question: How to delete document types in elasticsearch?, use Delete Mapping API:
curl -XDELETE http://localhost:9200/index/type
EDIT: From Elasticsearch 2.0 onwards, this won't be possible anymore. See Mapping changes. You will have to install the Delete By Query plugin and run a query which will remove your documents, but the mapping will still exist. So it will most likely be better to reindex your documents into another index without the old type.
But as #mguillemin and #javanna said, when you delete an index, every mapping attached to this index is deleted as well:
curl -XDELETE http://localhost:9200/index
You can use the _delete_by_query endpoint to delete the documents of a type:
POST index-name/type-name/_delete_by_query
{
"query": {
"match": {
"message": "some message"
}
}
}
For further reading, see the docs.
In the latest versions of Elasticsearch, deleting document types is no longer supported. It's mentioned in the documentation:
It is no longer possible to delete the mapping for a type. Instead you
should delete the index and recreate it with the new mappings.
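So on current versions the workflow is roughly the following; the index name and mapping here are only an illustration:
curl -XDELETE 'http://localhost:9200/myindex'
curl -XPUT 'http://localhost:9200/myindex' -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "properties": {
      "message": { "type": "text" }
    }
  }
}'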
You don't even need to specify the request body. Just
curl -XPOST http://<host>:9200/<index>/<type>/_delete_by_query

ElasticSearch: How do I delete index entries from head?

I want to delete index entries directly from MOBZ's ElasticSearch head (web UI).
I tried a DELETE query in the "Any Request" section with the following:
{"query":{"term":{"supplier":"ABC"}}}
However, all I get in return is:
{
  "ok": true,
  "acknowledged": true
}
and the entries do not get deleted.
What am I doing wrong?
You should have removed the "query" wrapper from your post data.
You only need it for _search, and you should be using the _query endpoint for delete.
In that case it is obvious the post body is only a query, which makes it redundant (and actually irrelevant) to state explicitly that it's a query.
That is:
curl -XPOST 'localhost:9200/myindex/mydoc/_search' -d
'{"query":{"term":{"supplier":"ABC"}}}'
will work fine for search.
But to delete by query, if you try:
curl -XDELETE 'localhost:9200/myindex/mydoc/_query' -d
'{"query":{"term":{"supplier":"ABC"}}}'
it won't work (note the change in the entry point to _query, as well as switching the curl method to DELETE).
You need to call:
curl -XDELETE 'localhost:9200/myindex/mydoc/_query' -d
'{"term":{"supplier":"ABC"}}'
Let me know if this helps.
If you want to do it in head:
Put /stock/one/_query in the "Any Request" text box next to the GET/PUT/POST/DELETE drop-down.
Choose DELETE in the drop-down menu.
The request body should be {"term":{"vendor":"Socks"}}.
Your problem was that you used a request body of {"query":{"term":{"vendor":"Socks"}}}.
That is fine for search, but not for delete.
A simple way to delete from the head plugin by doc ID:
Go to the Any Request tab in the head plugin.
Put http://localhost:9200/myindex/indextype/id in the text box above the DELETE drop-down.
Select DELETE from the drop-down.
Execute the request by clicking the Request button.
Here is a sample image:
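For reference, the equivalent request outside of head is a plain delete-by-ID call (index, type and id are placeholders):
curl -XDELETE 'http://localhost:9200/myindex/indextype/id'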
I'd issue a search request first, to verify that the documents you want deleted are actually being returned by your query.
It's impossible to give precise help, since there are many things that could be going wrong, but here are some possible problems:
You don't have the correct index/type specified in the ES Head query
You need to specify the index and type on the second input box, not the first. The first line is meant for the host address and automatically adds a trailing slash
You need to use the Delete command from the dropdown
The analyzer of your fields is altering the field text in a way that it isn't being found by the Term query.
In all likelihood, it is the last option. If you haven't specified an analyzer, the default one that ES picks is the standard analyzer, which includes a lowercase filter. The term "ABC" is therefore never indexed; instead, "abc" is indexed.
Term query is not analyzed at all, so case sensitivity is important.
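You can check this with the _analyze API; a sketch using the field from the question:
curl -XGET 'localhost:9200/myindex/_analyze?analyzer=standard&text=ABC'
The standard analyzer produces the single token "abc", so a term query for "ABC" matches nothing, while the lowercased value does:
curl -XPOST 'localhost:9200/myindex/mydoc/_search' -d
'{"query":{"term":{"supplier":"abc"}}}'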
If those tips don't help, post up your mapping and some sample data, and we can probably help better.
