Query a single entry from ELK's Elasticsearch via HTTP

I'm trying to build some kind of monitor for my ELK stack. I want to know when/if my ELK is down. This will be just a simple solution; I was tasked with integrating an on/off signal into a bigger, global monitoring tool.
So I want to query my ELK's Elasticsearch for the latest entry that matches one particular field value. My ELK data contains a field for each access.log row that states which server was the origin, so there is always, say, server_node.raw=Tomcat1 or Tomcat2 or ...
I do get a result from my index, but it seems like metadata to me: http://10.170.121.148:9100/logstash-2015.11.10/?pretty
Is there a way to query ES for the latest entry that matches server_node.raw=Tomcat1 using a simple HTTP request?
Using server_node.raw in Kibana works perfectly fine.
Anyone with an idea? I'd appreciate it.
Thanks in advance and regards. Sebastian

Yes, you are on the right path: you can simply query your logstash index with a URI search and q=server_node.raw:... like this:
curl -XGET 'http://10.170.121.148:9100/logstash-2015.11.10/_search?q=server_node.raw:Tomcat1&pretty'
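Since you asked for the latest entry, you can also have ES sort by timestamp and return only one hit. A minimal sketch, assuming your events carry the standard Logstash @timestamp field:
curl -XGET 'http://10.170.121.148:9100/logstash-2015.11.10/_search?q=server_node.raw:Tomcat1&sort=@timestamp:desc&size=1&pretty'
If that returns a recent hit, the node is logging; an empty hits array would be your "off" signal.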

Related

Can I put the result from Kibana into Elasticsearch again?

Can I put the response of a query I run in Kibana Dev Tools directly back into Elasticsearch?
Or must I write a script to achieve it?
Any recommendations?
OK, so here is one basic understanding after the discussion.
Please observe carefully.
If you have the head plugin installed for ES, search for the .kibana index.
Open the .kibana index and you will find all the designed dashboards listed there with their processed info.
Think of ES as another store from which you can read data and put that data into another ES index.
Refer to this link:
https://www.elastic.co/blog/kibana-under-the-hood-object-persistence
The tool you can opt for is Logstash, for both reading and writing; see the sketch below.
Learning Grok patterns can give you a good lead on that.
Tell me if you need some real screenshots for the same problem.
Happy learning.
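As a rough illustration of that read-and-write idea (the index names here are placeholders, not taken from your setup), a minimal Logstash pipeline could look like this:
input {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "source_index"
    query => '{ "query": { "match_all": {} } }'
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "target_index"
  }
}
This just copies documents from one index into another; any filter section you add in between is where your own processing would go.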
It is like cooking in a kitchen and then asking to put the cooked food back into the kitchen. If you cooked the food, better consume it :)
See, the visualization or processed data you see on the Kibana end is just for Kibana. The algorithms or processing techniques applied to the data set residing in Elasticsearch will also be applied over the upcoming data set.
So of course you can put/consume your data back into Elasticsearch again.
It depends on what sort of requirement you are facing.
Note: data in Elasticsearch (an inverted index) does not change its structure after Kibana's processing, which is why you can apply other processing techniques from Kibana over the same index, assuming the data is in its earlier state.

Index existing documents on startup

I'm new to Elasticsearch and this is a question I've been trying to find an answer to. Basically, I have around a thousand documents that I would like Elasticsearch to index for me. Do I have to write a bash/python script that would just use curl to put/post all these documents into my Elasticsearch server, or can I configure my server so that it would automatically index documents in a specific folder/location on disk when I start it up for the first time?
As far as I know, Elasticsearch does not have any option for pulling in documents to index by itself. As you mentioned, you need to create a script and push your documents to ES yourself.
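A minimal sketch of such a script, assuming your documents are individual JSON files under ./docs, and with my_index and the type "doc" as placeholder names:
#!/bin/sh
# Push every JSON file in ./docs into Elasticsearch as one document each.
# my_index and the type "doc" are placeholders; adjust for your cluster.
for f in ./docs/*.json; do
  curl -s -XPOST "http://localhost:9200/my_index/doc" \
       -H 'Content-Type: application/json' \
       --data-binary "@$f"
  echo    # newline between responses
done
For about a thousand documents this is fine; for much larger volumes, the _bulk endpoint (many documents per request) is considerably faster.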

How can I see a list of my Elasticsearch indices in Kibana?

I am starting to use ES and Kibana, so apologies in advance if this question doesn't make sense!
I'd like to be able to see in Kibana a list of my current indices, similar to what you get with:
curl 'localhost:9200/_cat/indices?v'
I was expecting to be able to see in Kibana functionality partly like a DB client where you can connect to a DB server and see all the databases, then drill down in each of them to see tables and content. I'd love to have that kind of workflow in Kibana.
The closest I can find is Management -> Index Patterns, but it displays a list of all fields, which is too much information, and I can't see any column in the table indicating which index each field belongs to.
As I said I'm just starting so it might be I'm not looking in the right place!
I don't think we have any option to see the hierarchy like you would in a traditional DB application.
If you are looking for something in Kibana that can give you information similar to curl 'localhost:9200/_cat/indices?v', then you can go to Monitoring -> Indices, which will list all the indices with their stats (document count, data size, indexing rate, etc.).
If you don't have X-Pack installed, then you have to use the Discover tab, where you can see the list of all the indices in the drop-down, as well as the _type and all the available fields in each index.
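Outside of Kibana, note that the _cat output itself can be trimmed to just the columns you care about; the h parameter selects columns and (on more recent ES versions) s sorts the rows:
curl 'localhost:9200/_cat/indices?v&h=index,docs.count,store.size&s=index'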

Can Beats update existing documents in Elasticsearch?

Consider the following use case:
I want the information from one particular log line to be indexed into Elasticsearch, as a document X.
I want the information from some log line further down the log file to be indexed into the same document X (not overwriting the original, just adding more data).
The first part, I can obviously achieve with filebeat.
For the second, does anyone have any idea about how to approach it? Could I still use filebeat + some pipeline on an ingest node for example?
Clearly, I can use the ES API to update said document, but I was looking for a solution that doesn't require changes to my application; rather, something that can be achieved entirely from the log files.
Thanks in advance!
No, this is not something that Beats were intended to accomplish. Enrichment like you describe is one of the things that Logstash can help with.
Logstash has an Elasticsearch input that would allow you to retrieve data from ES and use it in the pipeline for enrichment. And the Elasticsearch output supports upsert operations (update if exists, insert new if not). Using both those features you can enrich and update documents as new data comes in.
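A minimal sketch of such an output, where %{transaction_id} stands in for whatever correlation key you parse out of each log line (a hypothetical field, not something from your question):
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "my_logs"
    document_id => "%{transaction_id}"   # correlation key parsed from the log line
    action => "update"
    doc_as_upsert => true                # create the document if it does not exist yet
  }
}
With doc_as_upsert enabled, the first log line creates document X, and later lines with the same transaction_id merge their fields into it.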
You might want to consider ingesting the log lines as-is into Elasticsearch, and then using Logstash to build a separate, entity-specific index driven by the data from the logs.

Cannot retrieve a document using GET API

I'm using Elasticsearch version 1.2.0
I have documents indexed by bulk indexing.
When it comes to search, it works fine when I use _search endpoint to get a document that I want.
However, I cannot get the exact same document using the GET API.
For example, the code snippet below does not retrieve any result.
curl -XGET "http://xxx.xxx.xxx.xxx:9200/my_index/my_type/my_id?pretty"
However, when I specify the routing value, it retrieves correct result that I wanted to get.
curl -XGET "http://xxx.xxx.xxx.xxx:9200/my_index/my_type/my_id?routing=3&pretty"
Here is the thing I want to know about, because I've never used any kind of routing setting for my indexing operations.
And there are NO parent-child relations on "my_type".
Could anyone recommend other possible reasons for this kind of problem?
Thanks in advance.
Elasticsearch version 1.2.0 has a severe bug with respect to indexing.
The documentation recommends an upgrade to 1.2.1; I think you are running into this issue.
