Elasticsearch path.logs is not working correctly

When I set path.logs in elasticsearch.yml I get the behaviour that some logs are in the defined folder, but some files are also always created in the Elasticsearch root folder.
In the root folder's logs directory I find the PID file, the GC log, and the stderr and stdout files...
When I remove the folder, it's always recreated on startup.
How can I prevent ES from splitting its logs across two folders?

The path.logs setting in elasticsearch.yml changes the path for the Elasticsearch logs only.
Logs related to the JVM, like the GC logs, are configured in the jvm.options file, and the PID file location is set when starting up Elasticsearch using the -p option.
If you installed Elasticsearch using a package manager like yum or apt, you will need to edit the systemd elasticsearch.service unit and change the PID_DIR variable.
If you are starting Elasticsearch from the command line, you will need to pass the PID file location using the -p option, something like -p /path/to/elasticsearch.pid
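As a sketch of where those two stray files are controlled (the paths below are illustrative, not from the thread): the GC log destination is a file= target inside jvm.options, and the PID file is passed at startup.

```
# jvm.options – the stock file (JDK 9+) ships a GC logging line roughly like
# this; changing file= to an absolute path moves gc.log out of the root folder:
-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m

# startup – write the PID file somewhere explicit:
./bin/elasticsearch -d -p /var/run/elasticsearch/elasticsearch.pid
```

With both of those set to absolute paths, nothing should land in the Elasticsearch root folder on startup.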

Related

Running Kibana locally, why does the `bin/kibana` command work, but other seemingly equal commands do not

This isn't stopping me from running Kibana locally; I'd just like to better understand the mechanics of whatever script starts the service.
What I've noticed is that my local Kibana (kibana-7.14.2-darwin-x86_64) has a bin folder with a kibana Unix executable file in it. From this root directory, I can run bin/kibana to execute the kibana file and start the service, but if I run cd bin and then kibana, I get command not found: kibana.
What am I missing here?
Thanks!
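No answer is recorded in the thread, but the behaviour described is standard shell command resolution, which can be demonstrated with a throwaway script (the demo/ directory and tool script below are made up for illustration): a name containing a slash is executed as a path, while a bare name is looked up only in the directories on $PATH, which normally does not include the current directory.

```shell
# A command name with a slash is run as a path; a bare name is searched
# for only in $PATH, which usually excludes the current directory ".".
mkdir -p demo/bin
printf '#!/bin/sh\necho started\n' > demo/bin/tool
chmod +x demo/bin/tool

cd demo
bin/tool    # runs: "bin/tool" contains a slash, so it is treated as a path
cd bin
./tool      # runs: the leading "./" makes it an explicit path
# a bare "tool" here would give "command not found" unless "." is in $PATH
```

By the same rule, `cd bin` followed by `./kibana` should work where a bare `kibana` does not.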

Filebeat filebeat.yml file location

I've installed Filebeat on my machine, and I was wondering in which location the configuration file "filebeat.yml" should stay, since I've found two directories for Elastic:
C:\ProgramData\Elastic\Beats\filebeat
-> [I can also find filebeat.yml examples here][1]
C:\Program Files\Elastic\Beats\8.1.2\filebeat
Can someone help?
[1]: https://i.stack.imgur.com/8xqgU.png
The goal is to have a .yml file in a location that the Filebeat program can access. Either one would work just fine; all you do is point the running Filebeat at the desired filebeat.yml file. For example, on Linux, if I create a new .yml file called example.yml, I would run it with ./filebeat -c /example.yml.
The same should be the case for Windows.
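On Windows that might look like the following, using the two directories from the question (the exact paths on your machine may differ):

```
PS> cd "C:\Program Files\Elastic\Beats\8.1.2\filebeat"
PS> .\filebeat.exe -c "C:\ProgramData\Elastic\Beats\filebeat\filebeat.yml"
```

Keeping the binary under Program Files and the editable config under ProgramData matches how the installer splits read-only program files from mutable data.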

Having problems setting up Logstash

I've successfully been able to set up Elasticsearch, Kibana etc., and when I run 'sudo systemctl status elasticsearch' it is all running fine.
However, when I execute 'sudo systemctl status logstash', this is the output:
It fails to start Logstash. I've read numerous articles online saying it's something to do with the path or the config, but I've had no luck finding a working solution.
I have the JDK downloaded and followed the guide on the Logstash documentation site, so I'm unsure as to why Logstash is not being allowed to run.
This is the output when I try to find out the Logstash version.
The error message is
No configuration found in the configured sources
This means that you don't have any pipeline configuration in /etc/logstash/conf.d that Logstash can run, so it stops.
1. Run Logstash as a service: Logstash will read pipelines.yml to find your conf location.
Logstash finds your .conf file through pipelines.yml. By default it will look in /etc/logstash/conf.d/, as pipelines.yml shows.
You have to move your configuration file into that path so Logstash can find it.
2. Or run Logstash with a specified file: this ignores pipelines.yml, so Logstash goes directly to your .conf:
/usr/share/logstash/bin/logstash -f yourconf.conf
I suggest you do 1, but 2 is good for debugging your configuration file.
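For option 1, the default package layout looks roughly like this (the pipeline id and glob below are the package defaults, quoted from memory rather than from the thread):

```
# /etc/logstash/pipelines.yml
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"

# so a pipeline file placed there is picked up on restart:
sudo mv yourconf.conf /etc/logstash/conf.d/
sudo systemctl restart logstash
```

Any file matching the path.config glob becomes part of the main pipeline, which is why an empty conf.d produces the "No configuration found" error.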

Restoring an Elasticsearch snapshot from Windows to Debian

I'm trying to restore a snapshot I created on one computer, but I need to restore it onto another. I successfully created a repo and a first snapshot; I assume so because I have results from these queries:
GET /_snapshot/_all
GET /_snapshot/my_repo_1/snapshot_1
My Elasticsearch instance is configured to save the snapshots in:
path.repo: D:\BACKUP
So I copied the full folder (with a dir called "my_repo_1" and another 5 files), and I moved it to another computer with a clean instance of Elasticsearch (under another operating system). I configured the elasticsearch.yml file to match the location of the copied folder:
path.repo: /home/BACKUP
And I restarted the service, but when running:
curl 'localhost:9200/_snapshot/_all'
I got an empty response.
Any idea why it isn't working?
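No answer is recorded in the thread, but one step the question doesn't mention is that snapshot repositories are registered per cluster, not discovered from disk: after copying the files and setting path.repo, the new cluster still needs a PUT _snapshot call before GET /_snapshot/_all will return anything. A sketch, using the repo name and path from the question:

```
curl -X PUT 'localhost:9200/_snapshot/my_repo_1' \
  -H 'Content-Type: application/json' \
  -d '{"type": "fs", "settings": {"location": "/home/BACKUP"}}'

curl 'localhost:9200/_snapshot/my_repo_1/_all'
```

Because the location already contains the copied repository data, registering it should make the existing snapshots visible for restore.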

How do I force a rebuild of a log's data in Filebeat 5

I have Filebeat 5.x shipping logs to Logstash.
How do I reset the "file pointer" in Filebeat?
This is a similar problem to
How to force Logstash to reparse a file?
https://discuss.elastic.co/t/how-do-i-reset-the-file-pointer-in-filebeats/49440
I cleaned all of Elasticsearch's data and deleted /var/lib/filebeat/registry, but Filebeat is only shipping the new lines.
Changing registry_file doesn't help; the file's offsets are saved to the new file (and deleting the file has the same problem):
filebeat.registry_file: registry
Stop the filebeat service.
Rename the registry file - usually found in /var/lib/filebeat/registry
Start the filebeat service.
sudo service filebeat stop
mv /var/lib/filebeat/registry /var/lib/filebeat/registry.old
sudo service filebeat start
The Filebeat agent stores all of its state in the registry file. The location of the registry file should be set inside of your configuration file using the filebeat.registry_file configuration option.
I recommend specifying an absolute path in this option so that you know exactly where the file will be located. If you use a relative path then the value is interpreted relative to the ${path.data} directory. On Linux installations, when started as a service or started using the filebeat.sh wrapper, path.data is set to /var/lib/filebeat.
After deleting this registry file, Filebeat will begin reading all files from the beginning (unless you have configured a prospector with tail_files: true).
If you continue to have problems, I recommend looking at the Filebeat log file which will contain a line stating where the registry file is located. For example:
2017/01/18 18:51:31.418587 registrar.go:85: INFO Registry file set to: /var/lib/filebeat/registry
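The absolute-path recommendation above would look like this in filebeat.yml (the path shown is the Linux service default mentioned in the answer):

```
# filebeat.yml (Filebeat 5.x option)
filebeat.registry_file: /var/lib/filebeat/registry
```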
As already mentioned here, stopping the filebeat service, deleting the registry file(s) and restarting the service is correct.
I just wanted to add, for Windows users: if you haven't specified a unique location for filebeat.registry_file, it will likely default to ${path.data}/registry, which, somewhat confusingly, is the C:\ProgramData\filebeat directory as mentioned by the folks at Elastic.
In my case I had to show hidden files before it was displayed.
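So the Linux steps above translate to something like this on Windows (assuming Filebeat was installed as a service named "filebeat" and uses the default data path; adjust both if yours differ):

```
PS> Stop-Service filebeat
PS> Rename-Item C:\ProgramData\filebeat\registry registry.old
PS> Start-Service filebeat
```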
