Where are the log files stored? (Filebeat -> Logstash -> Elasticsearch)

I have installed the ELK stack with Filebeat.
I followed this blog for setup : https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-ubuntu-14-04#set-up-filebeat(add-client-servers)
When I tested with:
curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
I got the same result as shown in the blog.
I have two questions:
After the Logstash host receives the log data, where is it stored?
If I want to use Filebeat to forward the whole log file to the Logstash host and store it in a location I choose, how can I configure that?

Logstash sends its output wherever you configure it to. In the blog post you reference, the file 30-elasticsearch-output.conf contains an output{} section, which directs the output to Elasticsearch. There are lots of other possible outputs.
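As a minimal sketch (the hosts value, index name, and file path here are illustrative, not taken from the blog), the output{} section sends events to Elasticsearch, and the file output plugin can be added alongside it to also write events to a location of your choosing:

output {
  elasticsearch {
    hosts => ["localhost:9200"]          # where Elasticsearch is listening
    index => "filebeat-%{+YYYY.MM.dd}"   # daily indices matching the filebeat-* pattern
  }
  # Optional: also write each event to a file path you control
  file {
    path => "/var/log/logstash/forwarded-%{+YYYY-MM-dd}.log"
  }
}

That covers the second question as well: a file output like the one above stores events wherever you choose.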

Related

Filebeat's filebeat.yml file location

I've installed Filebeat on my machine, and I was wondering where the configuration file "filebeat.yml" should live, since I've found two directories for Elastic:
C:\ProgramData\Elastic\Beats\filebeat
(I can also find filebeat.yml examples here: https://i.stack.imgur.com/8xqgU.png)
C:\Program Files\Elastic\Beats\8.1.2\filebeat
Can someone help?
The goal is to have a .yml file in a location that the filebeat program can access. Either one would work just fine; you simply point the running filebeat at the desired filebeat.yml file. For example, on Linux, if I create a new .yml file called example.yml, I would run it with ./filebeat -c ./example.yml.
The same approach applies on Windows.
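For illustration, on Windows the same idea looks like this (the paths below reuse the directories from the question; point -c at wherever your filebeat.yml actually lives):

cd "C:\Program Files\Elastic\Beats\8.1.2\filebeat"
.\filebeat.exe -c "C:\ProgramData\Elastic\Beats\filebeat\filebeat.yml"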

Having problems setting up Logstash

I've successfully been able to set up Elasticsearch, Kibana, etc., and when I run 'sudo systemctl status elasticsearch' it is all running fine.
However, when I execute 'sudo systemctl status logstash', it fails to start Logstash. I've read numerous articles online saying it's something to do with the path or the config, but I've had no luck finding a working solution.
I have the JDK installed and followed the guide on the Logstash documentation site, so I'm unsure as to why Logstash is not being allowed to run. I also get an error when I try to find out the Logstash version.
The error message is
No configuration found in the configured sources
This means that you don't have any pipeline configuration in /etc/logstash/conf.d that Logstash can run, so it stops. You have two options:
1. Run Logstash normally. Logstash reads pipelines.yml to find your configuration location; by default it looks in /etc/logstash/conf.d/, as pipelines.yml shows. Move your configuration file into that path so Logstash can find it.
2. Run Logstash with a specific file. This ignores pipelines.yml, and Logstash goes directly to your .conf:
/usr/share/logstash/bin/logstash -f yourconf.conf
I suggest you do 1, but 2 is good for debugging your configuration file.
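For reference, the stock pipelines.yml shipped with the Logstash package looks roughly like this (check your own /etc/logstash/pipelines.yml):

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"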

How do I prevent Filebeat from re-sending old logs to Logstash?

I am using Filebeat to collect logs from a remote server and ship them to Logstash, and that works fine.
But when new logs are appended to the source log file, Filebeat reads the whole file from the beginning and sends it to Logstash, and Logstash appends everything to the older logs in Elasticsearch even though those older logs are already there, so the logs get duplicated.
So my question is: how do I ship only newly added log lines to Logstash? Whenever new lines are appended to the log file, only those new lines should be shipped through Logstash to Elasticsearch.
Here is my filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/user/Documents/ELK/locals/*.log
My Logstash input is logstash-input.conf:
input {
  beats {
    port => 5044
  }
}
I assume you're making the same mistake I did when testing Filebeat a few months back (using the vi editor to manually update the log/text file). When you edit a file manually with vi, it creates a new file on disk with new metadata. Filebeat identifies the state of a file by its metadata, not its text, and hence reloads the complete log file.
If this is the case, try appending to the file like this instead:
echo "something" >> /path/to/file.txt
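If you want to verify this behaviour on your own system, comparing the file's inode before and after an edit shows whether the editor replaced the file (the path is illustrative):

stat -c '%i' /path/to/file.txt   # print the inode number
vi /path/to/file.txt             # edit and save with vi
stat -c '%i' /path/to/file.txt   # a different number means a brand-new file was written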
There seems to be a problem somewhere. Usually Filebeat is intelligent enough to save its offset, meaning that it only ships log lines which have been added since the last crawl.
There are a few settings which could possibly interfere with that; please read up on them (an illustrative snippet follows the list):
- ignore_older
- close_inactive (https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-log.html#filebeat-input-log-close-inactive)
- close_timeout (https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-log.html#filebeat-input-log-close-timeout)
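For illustration, these options sit at the input level in filebeat.yml; the values below are examples, not recommendations:

filebeat.inputs:
- type: log
  paths:
    - /home/user/Documents/ELK/locals/*.log
  ignore_older: 72h      # skip files whose last modification is older than 72 hours
  close_inactive: 5m     # close the harvester after 5 minutes without new lines (the default)
  close_timeout: 0       # 0 (the default) disables force-closing a harvester after a fixed lifetime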
Please post your complete filebeat.yml file and also which system and which type of logs you are trying to harvest.

How do I force Filebeat 5 to rebuild a log's data?

I have Filebeat 5.x shipping logs to Logstash.
How do I reset the “file pointer” in Filebeat?
This is a similar problem to
How to force Logstash to reparse a file?
https://discuss.elastic.co/t/how-do-i-reset-the-file-pointer-in-filebeats/49440
I cleaned out all of Elasticsearch's data and deleted /var/lib/filebeat/registry, but Filebeat is still only shipping new lines.
Changing registry_file doesn't help either; the file offsets are just saved to the new registry file (deleting that file leads to the same problem):
filebeat.registry_file: registry
Stop the filebeat service.
Rename the registry file, usually found in /var/lib/filebeat/registry.
Start the filebeat service.
sudo service filebeat stop
mv /var/lib/filebeat/registry /var/lib/filebeat/registry.old
sudo service filebeat start
The Filebeat agent stores all of its state in the registry file. The location of the registry file should be set inside of your configuration file using the filebeat.registry_file configuration option.
I recommend specifying an absolute path in this option so that you know exactly where the file will be located. If you use a relative path then the value is interpreted relative to the ${path.data} directory. On Linux installations, when started as a service or started using the filebeat.sh wrapper, path.data is set to /var/lib/filebeat.
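For example, pinning the registry to an absolute path in filebeat.yml (the path shown is the usual Linux service default mentioned above):

filebeat.registry_file: /var/lib/filebeat/registry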
After deleting this registry file, Filebeat will begin reading all files from the beginning (unless you have configured a prospector with tail_files: true).
If you continue to have problems, I recommend looking at the Filebeat log file which will contain a line stating where the registry file is located. For example:
2017/01/18 18:51:31.418587 registrar.go:85: INFO Registry file set to: /var/lib/filebeat/registry
As already mentioned here, stopping the filebeat service, deleting the registry file(s) and restarting the service is correct.
I just wanted to add, for Windows users: if you haven't specified a unique location for filebeat.registry_file, it will likely default to ${path.data}/registry, which, somewhat confusingly, is the C:\ProgramData\filebeat directory, as mentioned by the folks at Elastic.
In my case I had to show hidden files before it was displayed.
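On Windows, the equivalent of the stop/rename/start steps above looks roughly like this in an elevated PowerShell (this assumes Filebeat was installed as a service named "filebeat" and uses the default registry location; adjust the paths if yours differ):

Stop-Service filebeat
Rename-Item C:\ProgramData\filebeat\registry registry.old
Start-Service filebeat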

Elasticsearch installed, but how do I install Kibana on localhost?

I'd like to view my machine's syslogs more beautifully on an Ubuntu desktop. I notice that all the Kibana documentation is oriented towards remote servers (which makes sense). However, how would I securely view the same information about my local machine?
Here are some things I've read that were not helpful because they were designed for remote access:
https://www.digitalocean.com/community/tutorials/how-to-use-logstash-and-kibana-to-centralize-logs-on-centos-7
Kibana deployment issue on server . . . client not able to access GUI
http://www.elasticsearch.org/overview/kibana/installation/ which has the following problems:
there is no config.js to open in an editor per step 2; you can see this very plainly on their GitHub page: https://github.com/elasticsearch/kibana
running
~/kibana/src/server/bin$ bash kibana.sh
The Kibana Backend is starting up... be patient
Error: Unable to access jarfile ./../lib/kibana.jar
How do I install kibana locally?
Not sure if you're still looking for an answer, but for future searchers:
What you can do is download elasticsearch - http://www.elasticsearch.org/overview/elkdownloads/
Extract it, and create a plugins subdirectory. Then, within the plugins directory, create a kibana/_site subdirectory.
Then download Kibana using the above-mentioned link. Extract the archive, then edit config.js to point at localhost as the elasticsearch host:
elasticsearch: "http://localhost:9200",
Copy the entire contents of the extracted Kibana folder into the kibana/_site directory you created inside the elasticsearch folder.
Then start elasticsearch:
within the elasticsearch directory -
bin/elasticsearch
Kibana will now run off of the same 'server' as elasticsearch, on your local host.
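Put together as a sketch (archive names and version numbers are illustrative; use whatever you actually downloaded):

tar -xzf elasticsearch-1.4.4.tar.gz
cd elasticsearch-1.4.4
mkdir -p plugins/kibana/_site
tar -xzf ../kibana-3.1.2.tar.gz -C /tmp
# edit /tmp/kibana-3.1.2/config.js so it contains:  elasticsearch: "http://localhost:9200",
cp -r /tmp/kibana-3.1.2/* plugins/kibana/_site/
bin/elasticsearch

Kibana should then be reachable through Elasticsearch's own HTTP port, typically at http://localhost:9200/_plugin/kibana/.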
UPDATE: Kibana 4 comes bundled with a web server now: see the docs
