How to show multiple servers' logs in Kibana separately from each other - elasticsearch

I have installed ELK on my Ubuntu server and installed Filebeat on remote server-A and server-B. I have configured Logstash to receive data from Filebeat and forward it to Elasticsearch. Both servers' logs are showing in Kibana --> Observability --> Logs.
The issue is that both servers' logs get mixed together, and it is hard for me to find a specific server's log. If I add three or four more servers for log monitoring, it will be much harder to identify or search the logs of a specific server. Is there any way to configure each server's logs separately in Kibana, so that it is easy to find a specific server's log?
Experts, I am looking forward to hearing from you.

You can use filters in the search bar to look for separate hosts.
Use a query like beat.hostname : "abc" and it will filter the log stream to just the hostname "abc".
Tip: You can also add the hostname as a column in the log stream, so that you can tell which log line comes from which host without even applying the filter mentioned above.
Go to Logs >> Settings and find the log columns options.
Here you can add multiple fields to be shown in the log stream. Timestamp and Message should already be there by default.
Add "beat.hostname" as a column.

Related

Difference between using Filebeat and Logstash to push log file to Elasticsearch

I am trying out the ELK stack to visualise my log file. I have tried different setups:
1) Logstash file input plugin https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html
2) Logstash Beats input plugin https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html with the Filebeat Logstash output https://www.elastic.co/guide/en/beats/filebeat/current/logstash-output.html
3) Filebeat Elasticsearch output https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html
Can someone list their differences and when to use which setup? If this is not the right place to ask, please point me to the right place, like Super User, DevOps, or Server Fault.
1) To use the Logstash file input you need a Logstash instance running on the machine from which you want to collect the logs. If the logs are on the same machine where you are already running Logstash, this is not a problem, but if the logs are on remote machines, a Logstash instance is not always recommended because it needs more resources than Filebeat.
2 and 3) For collecting logs on remote machines, Filebeat is recommended since it needs fewer resources than a Logstash instance. You would use the Logstash output if you want to parse your logs, add or remove fields, or do some enrichment on your data; if you don't need any of that, you can use the Elasticsearch output and send the data directly to Elasticsearch.
This is the main difference: if your logs are on the same machine where you are running Logstash, you can use the file input; if you need to collect logs from remote machines, you can use Filebeat and send the data to Logstash if you want to transform it, or directly to Elasticsearch if you don't.
Another advantage of using Filebeat, even on the Logstash machine, is that if your Logstash instance is down you won't lose any logs, because Filebeat will resend the events. With the file input you can lose events in some cases.
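For reference, the two Filebeat outputs discussed above look like this. A minimal sketch; the hosts are hypothetical, and only one output can be enabled at a time:

    # filebeat.yml - option 3: send directly to Elasticsearch (no parsing needed)
    output.elasticsearch:
      hosts: ["http://localhost:9200"]

    # filebeat.yml - option 2: send to Logstash for parsing/enrichment instead
    #output.logstash:
    #  hosts: ["localhost:5044"]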
An additional point for large-scale applications: if you have a lot of Beats (Filebeat, Heartbeat, Metricbeat...) instances, you would not want them all opening connections and sending data directly to the Elasticsearch instance at the same time.
Having too many concurrent indexing connections may result in a high bulk queue, poor responsiveness, and timeouts. For that reason, in most cases the common setup is to place Logstash between the Beats instances and Elasticsearch to control the indexing.
For even larger-scale systems, the common setup is to put a buffering message queue (Apache Kafka, RabbitMQ, or Redis) between the Beats and Logstash for resiliency, to avoid congestion on Logstash during event spikes.
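A minimal sketch of that buffered pipeline, with hypothetical broker and topic names: Filebeat ships events into Kafka, and Logstash consumes them from there.

    # filebeat.yml - write events to Kafka instead of directly to Logstash
    output.kafka:
      hosts: ["kafka1:9092"]
      topic: "filebeat-logs"

    # Logstash pipeline - read from the same topic and index into Elasticsearch
    input {
      kafka {
        bootstrap_servers => "kafka1:9092"
        topics => ["filebeat-logs"]
      }
    }
    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]
      }
    }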
Logz.io also has a good article on this topic.
Not really familiar with (2).
But Logstash (1) is usually a good choice when you want to take some content, play around with it using input/filter/output plugins, match it to your analyzers, and then send it to Elasticsearch.
Example: you point Logstash at your MySQL database; it takes a row, modifies the data (maybe does some math on it, concatenates some fields, cuts out some words), and then sends it to Elasticsearch as processed data.
As for Filebeat (2), it is a perfect choice for picking up already-processed data and passing it to Elasticsearch.
Logstash (as the name clearly states) is mostly good for log files and the like; usually you only make tiny changes to those.
Example: I have some log files on my servers (including error logs, syslogs, process logs...).
Logstash listens to those files, automatically picks up new lines added to them, and sends them to Elasticsearch.
Then you can filter in Elasticsearch and find what is important to you.
P.S.: Logstash also has a really good way of load balancing large volumes of data to Elasticsearch.
You can now use Filebeat to send logs directly to Elasticsearch or to Logstash (without a Logstash agent on each client, but you still need a Logstash server, of course).
The main advantage of Logstash is that it allows you to custom-parse each line of the logs, whereas Filebeat alone will simply ship the log line and there is not much separation into fields.
Elasticsearch will still index and store the data either way.
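To illustrate that parsing difference, here is a minimal Logstash filter sketch that splits each log line into separate fields; the log format (Apache combined) is an assumption:

    filter {
      grok {
        # parse an Apache combined log line into clientip, verb, request, response, etc.
        match => { "message" => "%{COMBINEDAPACHELOG}" }
      }
    }

With Filebeat alone, the same line would arrive in Elasticsearch as a single unparsed message field.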

Simple way to analyse a log file and display the result

I have a log file. I want to upload it, run some queries on it, and then display the results. What is the simplest way to do this? Is it possible with only Elasticsearch and Kibana, without using Logstash?
You need something to send the log file to Elasticsearch; it could be a Logstash instance, Filebeat, or a custom application/script.
With Logstash you will have more freedom if you want to parse the message or enrich the data; Filebeat is more limited in these aspects.
In my case I use both: I have a Logstash machine that receives the data sent by Filebeat agents installed on remote machines.
You need a log forwarder to read your log file and push its lines into the Elasticsearch cluster.
Well-known log forwarders are:
Logstash (which not only reads a log file and pushes log lines into Elasticsearch, but also lets you perform intermediate filtering and formatting on each log line before sending it; see the sketch below)
Filebeat (a very lightweight agent which reads log lines from a file and pushes them into Elasticsearch, but cannot perform the same intermediate filtering or formatting)
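A minimal Logstash pipeline sketch for the first option; the file path is hypothetical:

    input {
      file {
        path => "/var/log/myapp.log"     # hypothetical log file
        start_position => "beginning"    # read the existing file from the start
      }
    }
    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]
      }
    }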

Is it possible to configure multiple outputs for Filebeat?

In one of our applications we parse the application logs using Logstash and index them into Elasticsearch. Our simple architecture is: log files ---> Filebeat ---> Logstash ---> Elasticsearch.
As we enabled multiple log files (Apache logs, Passenger logs, application logs, etc.), Logstash is not able to keep up with the volume of data, and hence logs are missing in Elasticsearch. Is there any way to handle a huge volume of data in Logstash, or can we have multiple Logstash servers receive logs from Filebeat based on the log type? For example: application logs go to logstash-1 and Apache logs to logstash-2.
Thanks in advance.
It is not currently possible to define the same output type multiple times in Filebeat.
But there are a few options to achieve what you want:
You can use the loadbalance option in Filebeat to distribute your events across multiple Logstash instances (https://www.elastic.co/guide/en/beats/filebeat/current/logstash-output.html#loadbalance); by default, Beats will pick a random host and stick to it. See the sketch below.
Use a queue like Kafka and make Logstash use the Kafka input; this will allow you to add more Logstash instances as you need them.
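A minimal sketch of the loadbalance option; the Logstash hosts are hypothetical:

    # filebeat.yml
    output.logstash:
      hosts: ["logstash-1:5044", "logstash-2:5044"]
      loadbalance: true    # distribute events across all listed hosts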

How to watch the Logstash log?

For my enterprise application's distributed and structured logging, I use Logstash for log aggregation and Elasticsearch as log storage. I have clear control over pushing logs from my application to Logstash; on the other hand, I have very little control over what happens between Logstash and Elasticsearch.
Assume my Elasticsearch goes down for some reason. The Logstash log (/var/log/logstash/logstash.log) records the reason clearly, like the following:
Attempted to send a bulk request to Elasticsearch configured at '["http://localhost:9200/"]', but Elasticsearch appears to be unreachable or down! {:client_config=>{:hosts=>["http://localhost:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :error_message=>"Connection refused", :class=>"Manticore::SocketException", :level=>:error}
How can I get notified about error-level log entries from Logstash?
This should be doable with the following three steps:
1) It depends on how you want to get notified. If an email is sufficient, you could use the Logstash email output plugin.
But there are many more output plugins available.
2) To restrict the output to certain events, you can do something like this in your Logstash config (the example is taken from the Elastic support site):
output {
  if [level] == "ERROR" {
    ...
  }
}
The if clause is not limited to the level field of your JSON; you can of course apply it to any of your JSON fields, which makes it more powerful.
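Filled in with the email output from step 1, the snippet could look like this. A minimal sketch; the addresses and subject are hypothetical, and the email output plugin must be installed:

    output {
      if [level] == "ERROR" {
        email {
          to      => "ops@example.com"              # hypothetical recipient
          from    => "logstash-alerts@example.com"  # hypothetical sender
          subject => "Logstash reported an error"
          body    => "%{message}"                   # include the offending log line
        }
      }
    }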
3) To make this work (and not run into a logging cycle) you need to either:
Start a second Logstash instance on your system (just observing the Logstash ERROR log), which should be okay from what is written here,
Or build a more complicated configuration using just one Logstash instance. This configuration has to forward log statements from YOUR application to Elasticsearch, while log statements from the Logstash ERROR log are forwarded to, e.g., the Logstash email output plugin.
Side note: you may want to have a look at Filebeat, which works very well with Logstash (it is from Elastic as well) and is even more lightweight than Logstash. It allows things like include_lines: ["^ERR", "^WARN"] in its configuration.
To receive input from Filebeat you will have to adapt the Filebeat config to send data to Logstash, and in Logstash you will have to activate and use the Beats input plugin described here.
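In a recent Filebeat, that combination could look like the following minimal sketch; the paths and port are assumptions:

    # filebeat.yml - only ship error/warning lines from the Logstash log
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/logstash/logstash.log
        include_lines: ["^ERR", "^WARN"]

    output.logstash:
      hosts: ["localhost:5044"]    # the Beats input port on your Logstash instance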

Logstash output to server with elasticsearch

I intend to run Logstash on multiple clients, which in turn would submit their logs to Elasticsearch on a server (an Ubuntu machine, say).
Thus there are several clients running Logstash, all outputting their logs to Elasticsearch on a COMMON server.
Is this output redirection to a server possible with Logstash on the various clients?
If yes, what would the configuration file be?
You need a "broker" to collect the outputs from each of the servers.
Here's a good tutorial:
http://logstash.net/docs/1.1.11/tutorials/getting-started-centralized
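The centralized setup from that tutorial looks roughly like the following minimal sketch, using Redis as the broker; the host and key names are hypothetical:

    # on each client (shipper) - forward events to the broker
    output {
      redis {
        host => "broker-host"
        data_type => "list"
        key => "logstash"
      }
    }

    # on the central server (indexer) - read from the broker and index
    input {
      redis {
        host => "broker-host"
        data_type => "list"
        key => "logstash"
      }
    }
    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]
      }
    }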
