Connecting Elasticsearch to Kibana - elasticsearch

I am trying to display the iris data in Kibana by connecting to Elasticsearch and creating an index called "iris" from R. I am taking the following steps:
1. Execute the Elasticsearch 5.5.3 batch file (localhost:9200 displays results in the browser).
2. Run the following code in R (it connects and displays the iris search results successfully):
```r
library(elasticsearchr)

# connect to the local cluster and index the iris data frame
es <- elastic("http://localhost:9200", "iris", "data")
es %index% iris

# query everything back
for_everything <- query('{
  "match_all": {}
}')
es %search% for_everything
```
3. Run the Kibana 5.5.3 batch file (I checked the yml file, which says `#elasticsearch.url: "http://localhost:9200"`).
However, Kibana cannot find the "iris" index, as shown below:
I tried running the Logstash 5.5.3 batch file before step 3, but it generated an error message in the command prompt and closed. Another odd thing is that I don't see any index created at localhost:9200 in the browser, even though searching the index from R returns results. Also, below is the message I get when I start step 1.
FYI, here is the result of http://localhost:9200/_cat/indices:
This is a snapshot of my Kibana Management > Index Patterns page.

You should add an index pattern to Kibana via Management -> Kibana -> Index Patterns.
At the moment your search for the word "iris" is running against none of your indices. I also think you need to change your search phrase.
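Once the index pattern exists, you can also check a field-level search directly against Elasticsearch. A sketch of a match query body sent to http://localhost:9200/iris/data/_search (the Species field name is an assumption, based on the columns of R's iris data frame):

```json
{
  "query": {
    "match": { "Species": "setosa" }
  }
}
```

If this returns hits but Kibana's search bar does not, the problem is the Kibana query phrase or index pattern, not the data.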

Related

elasticsearch search by part of a word

Could you help me? There is an ELK cluster (version 5), and through Kibana I execute a query for part of a word using a wildcard, for example examp*, but nothing is found. If I search for the whole word example, then everything is found. I also have a second ELK cluster where searching by part of a word with a wildcard works correctly. I don't understand what the difference in settings between these two clusters is.
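For reference, the kind of query being described looks roughly like this (a sketch; message is a placeholder field name):

```json
{
  "query": {
    "wildcard": { "message": "examp*" }
  }
}
```

Wildcard queries run against the indexed terms, so whether examp* matches usually comes down to how the target field is analyzed (e.g. text vs keyword, or different analyzers), which is exactly the kind of per-index setting that can differ between two clusters.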

How is data getting mapped in Elasticsearch in the ELK stack?

I am new to the ELK stack and in the process of learning it. In my project, data is imported via Amazon S3 -> Filebeat -> Logstash -> Elasticsearch -> Kibana.
In the Logstash config file, the data is imported directly and sent to Elasticsearch with something like the output below, and no index is mentioned in the config file:
```
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
}
```
In Amazon S3 we have logs from Salesforce, and in the future we are going to ingest from multiple sources.
In Elasticsearch, I can see that 41 indices are present (using a GET curl script). Assuming we keep the same Logstash setup, all logs (from multiple sources) will be sent to Elasticsearch in the same manner. I would like to know how the data gets mapped to a particular index in Elasticsearch.
In many tutorials an index is given in the Logstash config file, so in Kibana you can see the index name along with a timestamp. I tried placing a sample MuleSoft log file in Amazon S3, but I am not able to find that data in Kibana. So do I need to create one more new index named "mule", along with mappings?
There is no ELK expert on my project, so please guide me on how to approach this; any references would be very helpful.
This page (https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html) documents Logstash's Elasticsearch output plugin.
As you can see in the Configuration Options section, the index option is not mandatory. If it is not specified, its default value is logstash-%{+YYYY.MM.dd}.
That means the documents get indexed into indices with the prefix logstash- followed by the date of ingestion, for example:
logstash-2020.04.07
logstash-2020.04.08
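The default naming can be sketched in Python, assuming the index name is built purely from the literal prefix and the ingestion date (as the %{+YYYY.MM.dd} sprintf-style date format suggests):

```python
from datetime import date

def default_logstash_index(d: date) -> str:
    """Mimic Logstash's default index option, "logstash-%{+YYYY.MM.dd}"."""
    # The literal prefix "logstash-" plus the dot-separated event date.
    return f"logstash-{d.strftime('%Y.%m.%d')}"

print(default_logstash_index(date(2020, 4, 7)))  # logstash-2020.04.07
print(default_logstash_index(date(2020, 4, 8)))  # logstash-2020.04.08
```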
Since someone in your organization chose to go with the default value, the option was left out. This explains why you can't find a particular index name in the Logstash configuration. If you need to index documents into different indices, you have to set an explicit value for the index option.
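For instance, to route the Mule logs into their own daily index, the output section could set the index option explicitly (a sketch; the index name mule-%{+YYYY.MM.dd} is an assumption):

```
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "mule-%{+YYYY.MM.dd}"
  }
}
```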
Elasticsearch will automatically create these indices with a dynamic mapping (https://www.elastic.co/guide/en/elasticsearch/reference/current/dynamic-mapping.html) unless you have set up an explicit mapping via index templates in advance. To see the data in Kibana, you first need to create an index pattern matching the index name.
I hope this helps.

Elasticsearch slow log configuration is not working

I tried to enable slow logs on my Elasticsearch server using the documentation below:
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-slowlog.html
I verified my index settings using the URL
http://localhost:9200/_all/_settings
The result is below:
```json
{
  "myindex": {
    "settings": {
      "index": {
        "search": {
          "slowlog": {
            "threshold": {
              "fetch": { "warn": "1ms", "trace": "1ms", "debug": "1ms", "info": "1ms" },
              "query": { "warn": "1ms", "trace": "1ms", "debug": "1ms", "info": "1ms" }
            }
          }
        },
        "number_of_shards": "3",
        "provided_name": "occindex",
        "creation_date": "1508319257925",
        "number_of_replicas": "2",
        "uuid": "dVAWgk62Sgivzr2B_OuCzA",
        "version": { "created": "5040399" }
      }
    }
  }
}
```
As per the documentation, I expect the logs to be populated when a threshold is breached.
I have set 1 ms as the threshold in order to log all queries hitting Elasticsearch.
However, I observed that under the logs folder, the log files elasticsearch_index_search_slowlog.log and elasticsearch.log do not show the queries that are hitting Elasticsearch.
Let me know if my configuration is correct.
The log worked after I inserted one record.
If you fire the query when there are no records in the index, the log is not updated.
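For reference, thresholds like the ones shown in the settings above can be applied dynamically with a PUT to the index settings endpoint; a sketch (myindex is a placeholder index name):

```
curl -XPUT 'http://localhost:9200/myindex/_settings' -d '
{
  "index.search.slowlog.threshold.query.warn": "1ms",
  "index.search.slowlog.threshold.fetch.warn": "1ms"
}'
```

Since the slow log is written per shard as queries execute, an index with no documents (or no queries actually reaching a shard) produces no slow-log entries, which matches the behavior described.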

Kibana Discover is not working after deleted all indices and adding new indices

I ran the following command to delete all indices, to see the changes in Kibana:
$ curl -XDELETE localhost:9200/_all
After this operation, Kibana no longer visualizes any data. It is not working anymore.
I extracted a new Kibana folder and set up all the configuration again, and I can see the indices in the Dev Tools tab. But these indices are not showing up in the Discover tab.
Sefa.
You need to recreate your index pattern under Management -> Index Patterns; see the documentation for index patterns.
Your index pattern should be logstash-* and the time field @timestamp in the dropdown.
To get a better understanding of your problem, open the page with your web browser's console open. You will see an error like "some default index not found".
Kibana always loads a default index pattern, and you have eliminated even that.
So set the default index pattern again and you will be good to go.

Upgrade of Elasticsearch from 1.3.2 to 1.4.0.Beta1 and installing Kibana 4 beta

I installed kibana-4.0.0-BETA1, then realised that it needs elasticsearch-1.4.0.Beta1, so I upgraded Elasticsearch from 1.3.2 to the newer version. Kibana 4 is not working, and I have messed up my old Elasticsearch and Kibana 3 as well.
Problems with elasticsearch 1.3.2 and Kibana 3:
I am able to create an index and view it in Kibana, but if I drag to zoom, it says no indices are present in this time period. Earlier it used to work fine.
I am unable to save a dashboard in Kibana.
The health of the kibana-int index in Elasticsearch is red, so I deleted that index.
I installed a new ELK stack on another brand-new machine.
Problems with elasticsearch-1.4.0.Beta1 and kibana-4.0.0-BETA1:
Most of the time Kibana 4 is not able to find Elasticsearch.
I am unable to save a visualization; I get the following exception, even though the visualization is saved:
```
TypeError: Cannot read property 'byName' of undefined
    at BaseAggParam.FieldAggParamFactory.FieldAggParam.deserialize
```
If I try to access any saved visualization, the same exception shows up.
Thanks in advance
When you load Kibana 4, there's an AJAX request to /config. It should return something like this:
```json
{
  "apps": [
    {"id": "discover", "name": "Discover"},
    {"id": "visualize", "name": "Visualize"},
    {"id": "dashboard", "name": "Dashboard"},
    {"id": "settings", "name": "Settings"}
  ],
  "defaultAppId": "discover",
  "elasticsearch": "http://blah/elasticsearch",
  "kibanaIndex": "kibana-int",
  "port": 9200
}
```
You see the "kibanaIndex" in there? Make sure your Kibana 3 and Kibana 4 use different indices, or they'll try to load each other's data.
In the Kibana 3 root there's a config.js. Find the line kibana_index: "kibana-int" and maybe change that too (one or the other; either way). Kibana saves its state and dashboard info in an index of its own, so make sure K3 and K4 have indices different from each other.
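For example, the two installs could be kept apart like this (a sketch; the index names are arbitrary, as long as they differ):

```
// Kibana 3, config.js:
kibana_index: "kibana3-int",

# Kibana 4, config/kibana.yml:
kibana_index: "kibana4-index"
```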
Edit: by the way, the error you are seeing is a bug: https://github.com/elasticsearch/kibana/pull/1617. It seems it's fixed.
Second edit: this only applied back in K4 beta 1. I think I had to make other adjustments for beta 2. YMMV.
