geoip.location has the geo_point datatype when an event is sent from Logstash to Elasticsearch with the default index name. Because geoip.location is a geo_point, I can view the plotted locations on maps in Kibana, since Kibana looks for the geo_point datatype for maps.
geoip.location is split into geoip.location.lat and geoip.location.lon with the number datatype when an event is sent from Logstash to Elasticsearch with a modified index name. Because of this, I'm not able to view the plotted locations on maps in Kibana.
I don't understand why Elasticsearch behaves differently when I add data to a modified index name. Is this a bug in Elasticsearch?
For my use case I need to use a modified index name, because I need a new index for each day. The plan is to store the logs of a particular day in a single index, so if there are 7 days, I need 7 indices, one per day's logs (a new index should be created based on the current date).
I searched around for a solution, but I wasn't able to understand it and make it work for me. Kindly help me out with this.
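For reference, the daily index name itself is usually produced with a date pattern in the Logstash elasticsearch output. A minimal sketch, assuming a local cluster and reusing the dave prefix from the update below:

output {
  elasticsearch {
    # assumed host; adjust to your cluster
    hosts => ["localhost:9200"]
    # one index per day, e.g. dave-2018.06.21
    index => "dave-%{+YYYY.MM.dd}"
  }
}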
Update (what I did after reading xeraa's answer)
In the Dev Tools console in Kibana,
GET _template/logstash showed the allowed patterns in the index_patterns property, along with the other template properties.
I included my pattern (dave*) in index_patterns and sent the PUT request. Note that you have to pass the entire existing body (which you receive from the GET request) in the PUT request along with your required index_patterns; otherwise the default settings will disappear, because the PUT API replaces the template with whatever you pass in the body:
PUT _template/logstash
{
  ...
  "index_patterns": [
    "logstash-*", "dave*"
  ],
  ...
}
I'd guess that there is an index template set up for the default name, which isn't being applied when you rename the index.
Check with GET _template whether any template matches your old index name, and update its settings so that it also gets applied to the new one.
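In curl form, that check is simply (a sketch against a local node):

curl -XGET 'http://localhost:9200/_template?pretty'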
I have added a mixed index in JanusGraph to support full-text search with Elasticsearch.
The mixed index looks like this:
myindex = mgmt.buildIndex("myesindex", Vertex.class)
    .addKey(mgmt.getPropertyKey("name"), Mapping.TEXTSTRING.asParameter())
    .addKey(mgmt.getPropertyKey("sabindex"), Mapping.TEXTSTRING.asParameter())
    .buildMixedIndex("search");
I am able to load data into Elasticsearch and to execute queries successfully.
The issue I am facing comes when I run this query:
g.V().has('code','abc').valueMap()
==>{str=[some text], code=[abc], sab=[sab], sabindex=[sabindex], name=[[some tex]]}
I get results successfully, but when I try to search by both name and code:
g.V().has('name', textContains('some text')).has('code','abc').valueMap()
(The code field is also indexed, with a composite index.)
I get no results, even though the data is present in the graph and in Elasticsearch.
In another scenario, the same query with a different name and code works successfully. I have also rebuilt the index multiple times without any positive results.
The first query shows the value is name=[[some tex]]. It is missing the final t in text, which explains why the query isn't matching some text.
If you instead use textContains('some tex'), you get the same result as the first query. Using the profile() step shows that myindex was utilized.
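For illustration, a minimal sketch of the corrected traversal plus a profile to confirm the index is used (the values come from the question's output):

g.V().has('name', textContains('some tex')).has('code', 'abc').valueMap()
// confirm that the mixed index is hit rather than a full scan
g.V().has('name', textContains('some tex')).has('code', 'abc').profile()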
See this gist for a recreation of the scenario.
A somewhat similar question has been asked here, but there is no answer for it yet, and it relates to an older version of Kibana, so I hope you can help me.
I'm trying to set up some predefined queries in the Kibana dashboard. I'm using Kibana 5.1. The purpose of these queries is to filter logs on several different parameters.
Let's see a query I'd like to execute:
{
  "index": "${index_name}",
  "query": {
    "query_string": {
      "query": "message:(+\"${LOG_LEVEL}\")",
      "analyze_wildcard": true
    }
  }
}
I know I can type something like "message:(+"ERROR")" directly into the dashboard query bar and manually change ERROR to WARN, for example, but I don't want that: the query might be more complex and contain multiple fields.
Note that the data stored in message is not structured; think of the message as a whole log line. This means I don't have a field like LOG_LEVEL that I could filter on directly.
Is there any way I can set the index_name and LOG_LEVEL dynamically from the Kibana Discover dashboard?
Go to Discover, open one document, and click this button on any of the fields. A filter will then appear under the search bar, and you can edit it and put any custom query in it. If you want to add more filters with more custom queries, repeat the same action with a different document or field. Alternatively, go to Settings (or Management) > Saved Objects, open the Search you saved, and in its JSON representation copy and paste the elements inside the filter array as many times as you want.
And remember that in order to apply one of the filters, you should probably disable the currently enabled ones (otherwise the search is filtered by all the enabled filters in your dashboard).
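For example, when editing a filter's query DSL, a custom entry might look roughly like this (a sketch; the message field and the ERROR level are taken from the question):

{
  "query": {
    "query_string": {
      "query": "message:(+\"ERROR\")",
      "analyze_wildcard": true
    }
  }
}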
I set up Elasticsearch and Kibana for indexing our application (error) logs. The issue is that Kibana doesn't display any data in the Discover tab.
Current situation
Elasticsearch is up and running and responds to the API
executing a query directly against Elasticsearch, like http://elasticserver.com:9200/applogs/_search?q=*, returns lots of results (see below for what a single record looks like)
Kibana is up and running and even finds the applogs index exposed by Elasticsearch
Kibana also shows the correct properties and data types of the applogs documents
the Discover tab doesn't show any results... even when setting the time period to a couple of years
Any ideas?
Here's how Kibana sees the applogs index:
An Elasticsearch query result object looks like this:
{
  "_index": "applogs",
  "_type": "1",
  "_id": "AUxv8uxX6xaLDVAP5Zud",
  "_score": 1,
  "_source": {
    "appUid": "esb.Idman_v4.getPerson",
    "level": "trace",
    "message": "WS stopwatch is at 111ms.",
    "detail": "",
    "url": "",
    "user": "bla bla bla",
    "additionalInfo": "some more info",
    "timestamp": "2015-03-31T15:08:49"
  }
},
...and here's what I see in the Discover tab:
For people who have a problem like this: change the time frame in the top right corner.
By default it shows data only for the last 15 minutes.
I wanted to put this as a comment but unfortunately I can't, given my insufficient rep. So, as @Ngeunpo suggested, this is how you add a time field to an index while creating it. If you did not do that while creating your index, I suggest you delete the index and recreate it. The index name logstash-* in the gif is analogous to your index applogs. In this case, the field @timestamp is added as the time field. Let me know if this works.
EDIT: Image courtesy: This wonderful ELK setup guide
Kibana does not understand the timestamp field if its format is incorrect. The timestamp, which you selected by clicking Time-field name when configuring an index pattern, needs to be:
"timestamp":"2015-08-05 07:40:20.123"
Then you should update your index mapping like this:
curl -XPUT 'http://localhost:9200/applogs/1/_mapping' -d '
{
  "1": {
    "properties": {
      "timestamp": {
        "type": "date",
        "format": "yyyy-MM-dd HH:mm:ss.SSS",
        "store": true
      }
    }
  }
}'
See this question and answer
UPDATE
If you are using ES 2.X, you can set the "format" to "epoch_millis" like this:
curl -XPUT 'http://localhost:9200/applogs/1/_mapping' -d '
{
  "1": {
    "properties": {
      "timestamp": {
        "type": "date",
        "format": "epoch_millis",
        "store": true,
        "doc_values": true
      }
    }
  }
}'
Try this: uncheck the "Index contains time-based events" checkbox,
then provide your index name and check whether Discover contains data or not.
I had the same issue and this worked for me:
Delete the index from the Settings tab
Restart Kibana
Then re-add the index in Settings
The time-series issues can certainly also be a cause, but if no fields at all show up in the Discover tab, then you might have the same issue as the original reporter and as I had.
I probably had the same issue: I saw data in the dashboard but 0 results in Discover. Going to Management > Index Patterns and clicking the Refresh field list button (the button with only a refresh icon) solved it for me.
I had the same issue, and @tAn-'s comment helped me resolve it. Changing the date field to @timestamp did the trick. Thanks!
The next step should be to find out what was wrong with my custom date field.
I had the same problem, but now it's working fine.
The problem was with the @timestamp. I had uploaded the file to Elasticsearch using Logstash, so it automatically generated a @timestamp field. Kibana compares the selected time range with this @timestamp, that is, with when the actual event occurred. Even if I deselect the "Index contains time-based events" option on the add-new-index-pattern page, Kibana still automatically considers the @timestamp field. So adjusting the timeframe in Kibana based on the @timestamp field worked for me.
You can also check by adding an index pattern without a timestamp and deselecting the "Index contains time-based events" option. See what happens: there won't be a time frame selector on the Kibana Discover page, and you will most probably get results on the Discover page.
These are all my observations; I'm not sure this solution fits your case, but you may try it.
I am using ES 1.5.x, Logstash 1.5.1 and Kibana 4.1.0.
I also experienced the same error. It mostly happens because of the time format. Basically, make sure you have a valid time frame for your data (the top-right filter). In my case, I used the plain epoch (seconds) format for the timestamp, and it didn't work. So I changed it to epoch_millis instead, and it worked like a charm.
In short, make sure that Kibana can understand your date-time format. By default Elasticsearch expects epoch_millis, not plain epoch.
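If the source data really is in whole seconds, the field can instead be mapped with the built-in epoch_second format when the index is (re)created. A sketch reusing the applogs index and type from this thread (ES 2.x syntax):

curl -XPUT 'http://localhost:9200/applogs/1/_mapping' -d '
{
  "1": {
    "properties": {
      "timestamp": { "type": "date", "format": "epoch_second" }
    }
  }
}'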
In my situation, everything had been working previously, and then I couldn't see the latest data starting February 1st (actually I could, if I looked back a month). It turned out that the mapping format for my custom time field was incorrect. My mapping format was YYYY-MM-DD'T'HH:mm:ss.SSSZ. The problem is that DD is interpreted as day of the year, and I wanted day of the month, which is dd. Changing the mapping and reindexing fixed the problem.
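A sketch of the corrected mapping for that case, applied while reindexing, since a date format cannot be changed in place on an existing field (the eventTime field name is hypothetical; note also that yyyy, the plain calendar year, is usually safer than YYYY, which is interpreted as week-year):

PUT applogs/_mapping/1
{
  "1": {
    "properties": {
      "eventTime": {
        "type": "date",
        "format": "YYYY-MM-dd'T'HH:mm:ss.SSSZ"
      }
    }
  }
}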
In my case, I set the time from the server log, and that time differed from UTC (the log's time was in the future compared to UTC).
So when I searched for logs with a filter of days/months/years ago, there were no logs, because they were at a future time.
When I used the Today filter, or a future time, the logs showed up.
After changing the time zone, it was fixed.
I had the same issue. So, as shown in one of the solutions above, I went to Settings, deleted the previous index, and made a new one with @timestamp.
But that didn't solve the issue. Looking into it further, I saw that after a deployment nothing was coming into Kibana.
So I went onto the server and saw that the indices were corrupted. I just stopped Logstash and Elasticsearch on the instance/server and restarted the services.
And voila, the services restarted successfully and Kibana was back.
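For reference, that restart is roughly the following (an assumption: init-style services; the service names vary by install method):

# stop the pipeline first, then bring the stack back up in order
sudo service logstash stop
sudo service elasticsearch stop
sudo service elasticsearch start
sudo service logstash start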
WHY DID IT HAPPEN?
Someone might have stopped the server abruptly, which caused the indices to get corrupted.
I am trying to integrate Solr with Magento on my development machine. We are upgrading Magento and I want to test whether Solr works as well.
I am able to feed Solr, and the stats say that it has documents. In the Solr admin, when I put in *:* as the query string, I do get the list of documents. But when I search for "maria mosters", for example, no results are returned.
I have tried Solr 1.4.1 (which we run in production) and 3.4.0.
My schema.xml: http://pastebin.com/3a2J99re
Thank you for your replies. I finally found the answer for my case.
I found it by checking the query string that Solr was logging. It was, for example:
127.0.0.1 - - [28/09/2011:09:05:34 +0000] "GET /solr/select?sort=score+desc&fl=id&spellcheck=true&spellcheck.count=2&qt=magento_nl&spellcheck.collate=true&spellcheck.dictionary=magento_spell_nl&spellcheck.extendedResults=true&fq=visibility%3A4+AND+store_id%3A1&version=1.2&wt=json&json.nl=map&q=%28maria+mosterd%29&start=0&rows=1 HTTP/1.0" 400 1405
When I requested this query the first time, it said that the field visibility was unknown. Apparently this field was added by Magento in the upgraded release. I added the field to the schema and ran the query again. Now it said that the dictionary magento_spell_nl did not exist.
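A field declaration along these lines in schema.xml would cover it (a sketch; the type and attributes are assumptions, since Magento passes visibility as a numeric code in the fq above):

<!-- hypothetical declaration for the missing Magento field -->
<field name="visibility" type="int" indexed="true" stored="false" multiValued="false" />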
What happened?
The new Magento has an option called "Enable Search Suggestions". In my previous Magento version this option didn't exist, so the spellchecker parameters were not passed in the query string.
When I turned this setting off, I was able to use my exact copy of the production server.
*:* works because it matches everything in all fields.
A search for maria mosters is going to search the default field if you are using the standard request handler.
The default search field set in the schema is fulltext, and I don't see any copyFields into it.
So are you sure the field is populated?
If you are using a custom request handler via the qt param, are the proper fields included in it?
Sharing your solrconfig and the full query might help others to help you further.
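For reference, populating a default field like fulltext is normally done with copyField directives in schema.xml (a sketch; the source field names are assumptions):

<!-- copy assumed source fields into the default search field -->
<copyField source="name" dest="fulltext" />
<copyField source="description" dest="fulltext" />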
It looks like the issue is that in your schema you have the fulltext field defined as the default search field, but you are not populating that field. I would recommend either setting the default field to another field that you are populating, or specifying the field you want to search against when you execute your query, for example text_en:"maria mosters".
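For the first option, the schema-side change is a single element (a sketch; it assumes text_en is a field you actually populate):

<!-- point the default at a populated field -->
<defaultSearchField>text_en</defaultSearchField>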
Please also see the SolrQuerySyntax page on the Solr Wiki for more details.