I wrote a Python program to parse an internal website to retrieve a number of metrics. The script spits out something like this:
12 456 785.3 .12 23145
Each value is a distinct performance metric. I'm now looking for ways to get this data into Elasticsearch so I can start making use of it. There are several input plugins:
https://www.elastic.co/guide/en/logstash/current/input-plugins.html
And I'm curious which one folks are using to get script data into Elasticsearch. Should I funnel it to syslog and grok it from there? Is anyone using the meetup plugin? Are there other solutions I may be missing? I've read numerous posts and websites, and the solutions they propose seem extremely complicated for this super simple job.
If this is a custom script for just this task, I'd change the script to emit JSON and use the Python Elasticsearch client to insert the document into Elasticsearch. Alternatively, your script could output JSON to stdout, and a simple bash script could curl the document into Elasticsearch.
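For the first option, a minimal sketch using the official elasticsearch Python package. The index name, field names, and URL are placeholders; with 7.x clients the final call takes body= instead of document=:

    # Minimal sketch: parse the script's output line and index it as one
    # document. Index name, field names, and the URL are placeholders.
    from datetime import datetime, timezone
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    line = "12 456 785.3 .12 23145"
    names = ["metric_a", "metric_b", "metric_c", "metric_d", "metric_e"]
    doc = dict(zip(names, (float(v) for v in line.split())))
    doc["@timestamp"] = datetime.now(timezone.utc).isoformat()

    es.index(index="metrics", document=doc)  # body=doc on older clients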
Related
I am starting to get acquainted with ELK for work purposes, but I'm struggling to find a way to perform simple mathematical operations on my data.
As shown in the picture, my database contains 16 available fields, but I would like to create others without doing it in Excel and converting my file to CSV again.
For example, I would like to create a variable #Bugs/Release. I've heard this is quite easy to do with no scripting needed, but I can't find the way to do it... Does anybody have a solution to this problem?
Huge thanks!
I'm processing a log file with the Logstash aggregate filter and a grok filter that has multiple patterns.
While processing the logs, I want to extract part of each log line with a regex and store it in a file.
For example, let's say my log is:
id:0422 time:[2013-11-19 02:34:58] level:INFO text:(Lorem Ipsum is simply dummy text of the printing and typesetting industry)
In these logs, the text will be different every time.
I have a regex that can match a part of that text as it occurs in the log.
If that regex matches something in the text while Logstash is indexing into Elasticsearch, I want to store the match in a file or something similar.
Is it possible to achieve this?
There are different solutions for this:
create a filter using Ruby code that is triggered to write in a specific format once you have all the event data together
create a separate output that is triggered by an if statement and writes to a file; this is the preferred way of working, as it makes clear that it is an output.
Depending on whether you want to send all of the data to both destinations, or have it look different in each, you might need to use the clone filter to clone the event into two copies that can be manipulated independently of each other using tags.
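For what it's worth, a rough pipeline sketch of the second approach combined with clone. The tag name, file path, and grok pattern are placeholders; the pattern stands in for the regex you already have:

    filter {
      # Duplicate the event; depending on the plugin version / ECS setting,
      # the clone name lands in [type] or is added to [tags].
      clone {
        clones => ["file_copy"]
      }
      # Placeholder pattern standing in for your regex: captures the text
      # between "text:(" and ")" into a field called [extracted].
      grok {
        match => { "message" => "text:\((?<extracted>[^)]*)\)" }
      }
    }

    output {
      if [type] == "file_copy" or "file_copy" in [tags] {
        file {
          path => "/tmp/extracted.log"
          codec => line { format => "%{extracted}" }
        }
      } else {
        elasticsearch { hosts => ["localhost:9200"] }
      }
    }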
I have a field in my logs called json_path containing data like /nfs/abc/123/subdir/blah.json, and I want to create a count plot on part of that string: abc here, i.e. the third chunk when splitting on /. I have tried all sorts of online answers, but they're all partial answers (nothing I can easily understand how to use or integrate). I've tried running POST/GET queries in the Console, which all failed due to syntax errors I couldn't manage to debug (they complained about newline control characters, when there were none that I could see, even in a text editor that explicitly shows control characters). I also tried Management -> Index Patterns -> Scripted Field, but after adding my code there, Kibana basically crashed (stopped working temporarily) until I removed the scripted field.
All this Elasticsearch and Kibana stuff is annoyingly difficult; all the docs expect you to be an expert in their tool, rather than just an engineer needing to visualize some data.
I don't really want to add a new data field in my log-generation code, because then all my old logs would be unsupported (they contain the relevant data; it just needs a bit of string processing before visualization). I know I could probably back-annotate the old logs, but the whole Kibana/Elasticsearch experience is just frustrating, and I don't use it enough to justify learning such detailed procedures (I actually learned a bunch of this stuff a year ago and then promptly forgot it through lack of use).
You cannot plot on a substring of a field unless you extract that substring into a new field. I can understand the frustration of learning a new product, but to achieve what you want, you need that substring value in a new field. Scripted fields are generally used to modify a field; to extract a substring from one, I'd recommend using an ingest node processor such as the grok processor. This will add a new field which you can then use in Kibana visualizations.
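A hedged sketch of that approach using the Python client; the pipeline id and the target field name dataset are made up for illustration, and the grok pattern grabs the third /-separated chunk:

    # Sketch: an ingest pipeline that copies the third path segment of
    # json_path (e.g. "abc" in /nfs/abc/123/...) into a new "dataset" field.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    es.ingest.put_pipeline(
        id="extract-dataset",
        processors=[
            {
                "grok": {
                    "field": "json_path",
                    "patterns": ["^/[^/]+/(?<dataset>[^/]+)/"],
                }
            }
        ],
    )

Documents indexed with ?pipeline=extract-dataset then carry a dataset field that Kibana can aggregate on; old documents can pick it up via _update_by_query or a reindex through the same pipeline.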
I'm a beginner in shell scripting.
I'm trying to write a script in which one part involves reading a value from a webpage. In this case, the shell script tries to fetch the IMDB rating of a movie by going to the movie's IMDB page.
Can someone suggest how I can achieve this, and also what topics I need to learn?
Thank you.
You can use wget or curl to get the page. Then you'll need a regex or some other string manipulation to pull out the information you need. It would be a lot easier to use a library to do some of this for you.
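If Python is an option (it often beats pure shell for this), here's a rough sketch. The title URL is just an example, and the regex is a guess at the page's embedded JSON-LD metadata, which IMDB changes from time to time:

    # Rough sketch: fetch the page, then pull the rating out with a regex.
    # The pattern targets the "ratingValue" entry in the page's JSON-LD
    # block and may break whenever IMDB changes its markup.
    import re
    import urllib.request

    url = "https://www.imdb.com/title/tt0111161/"  # example title
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    html = urllib.request.urlopen(req).read().decode("utf-8")

    match = re.search(r'"ratingValue"\s*:\s*"?([0-9.]+)', html)
    print(match.group(1) if match else "rating not found")

Topics worth learning for this: HTTP basics, regular expressions, and an HTML parser such as BeautifulSoup, which is more robust than a regex for anything beyond one-off scraping.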
As the title suggests, I'm looking for a tool that will convert the existing data in a Hadoop sequence file to JSON format.
My initial googling has only turned up results related to jaql, which I'm desperately trying to get to work.
Is there any tool from Apache available for this very purpose?
NOTE:
I have a Hadoop sequence file sitting on my local machine and would like to get the data in the corresponding JSON format.
So, in effect, I'm looking for a tool or utility that takes a Hadoop sequence file as input and produces JSON as output.
Thanks
Apache Hadoop might be a good tool for reading sequence files.
All kidding aside, though, why not write the simplest possible Java Mapper program that uses, say, Jackson to serialize each key and value pair it sees? That would be a pretty easy program to write.
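For what it's worth, a rough sketch of that mapper-only job. It assumes the file's Writable key and value types have a meaningful toString(); class and path names are illustrative:

    // Rough sketch: a map-only job that reads a SequenceFile and writes one
    // JSON object per key/value pair using Jackson. Adapt the toString()
    // calls if your Writables need richer serialization.
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class SequenceFileToJson {

      public static class JsonMapper
          extends Mapper<Writable, Writable, NullWritable, Text> {
        private final ObjectMapper om = new ObjectMapper();

        @Override
        protected void map(Writable key, Writable value, Context ctx)
            throws IOException, InterruptedException {
          String json = om.createObjectNode()
              .put("key", key.toString())
              .put("value", value.toString())
              .toString();
          ctx.write(NullWritable.get(), new Text(json));
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "seqfile-to-json");
        job.setJarByClass(SequenceFileToJson.class);
        job.setMapperClass(JsonMapper.class);
        job.setNumReduceTasks(0); // map-only: one JSON line per record
        job.setInputFormatClass(SequenceFileInputFormat.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);
        SequenceFileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }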
I thought there must be some tool that does this, given that it's such a common requirement. Yes, it should be pretty easy to code, but then again, why do so if something already exists that does exactly the same thing?
Anyway, I figured out how to do it using jaql. Here's a sample query that worked for me:
read({type: 'hdfs', location: 'some_hdfs_file', inoptions: {converter: 'com.ibm.jaql.io.hadoop.converter.FromJsonTextConverter'}});