Ubuntu-based Logstash Keystore Permissions Issues

Background: I'm working in an Ubuntu 20.04 environment, setting up Logstash servers to ship metrics to my Elastic cluster. With my relatively basic configuration, a Filebeat process sends logs to a load balancer, which spreads them across my Logstash servers and up to Elastic. This process works. I'd like to use the Logstash keystore to avoid passing sensitive variables to my logstash.yml file in plain text. In my environment, I'm able to follow the Elastic documentation to set up a password-protected keystore in the default location, add keys to it, and successfully list those keys.
Problems: While the Logstash servers run successfully without the keystore, the moment I add it and try to watch the logfile on startup, the process never starts. It seems to keep attempting to restart without ever logging to logstash-plain.log. When trying to run the process in the foreground with this configuration, the error I received was the rather unhelpful:
Found a file at /etc/logstash/logstash.keystore,
but it is not a valid Logstash keystore
Troubleshooting Done: After trying steps found in other issues, such as replacing the /etc/sysconfig/logstash approach with simply adding the password to /etc/default/logstash, the errors were a little more helpful, stating that the file permissions or password were incorrect. The logstash-keystore tool itself was able to create and list keys, so the password was correct, and the keystore itself was set to 0644. I tried multiple permissions configurations and was still unable to get Logstash to run as a service or in the foreground.
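For reference, that environment-file approach amounts to something like this (a sketch; LOGSTASH_KEYSTORE_PASS is the variable the Logstash docs name for the keystore password, and the value here is a placeholder):
# /etc/default/logstash -- read as an environment file by the systemd unit on Ubuntu
LOGSTASH_KEYSTORE_PASS="my-keystore-password"   # placeholder value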
I'm still under the impression it's a permissions issue, but I don't know how to resolve it. Logstash runs as the logstash user, which should be able to read the keystore file, since it's 0644 and housed in the same directory as logstash.yml.
Has anyone experienced something similar with Logstash & Ubuntu, or in a similar environment? If so, how did you manage to get past it? I'm open to ideas and would love to get this working.

Try running logstash-keystore as the logstash user:
sudo -u logstash /usr/share/logstash/bin/logstash-keystore \
--path.settings /etc/logstash list
[Aside from the usual caveats about this kind of secret obfuscation, it's worth making explicit that the docs expect logstash-keystore to be run as root, not as logstash. So after you're done troubleshooting, and especially if you create a keystore owned by logstash, make sure it ultimately ends up with sufficiently restrictive permissions.]
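One possible restrictive end state, sketched under the assumption that root owns the keystore and the logstash group only needs read access:
# keep ownership with root, grant the logstash group read-only access
sudo chown root:logstash /etc/logstash/logstash.keystore
sudo chmod 0640 /etc/logstash/logstash.keystore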
Alternatively, you could run some other command as the logstash user. To validate the permission hypothesis, you just need to read the file as user logstash:
sudo -u logstash file /etc/logstash/logstash.keystore
sudo -u logstash md5sum /etc/logstash/logstash.keystore
su logstash -c 'cat /etc/logstash/logstash.keystore > /dev/null'
# and so on
If, as you suspect, there is a permissions problem, and the read test fails, assemble the necessary data with these commands:
ls -dla /etc/logstash/{,logstash.keystore}
groups logstash
By this point you should know:
what groups logstash is in
what groups are able to open /etc/logstash
what groups are able to read /etc/logstash/logstash.keystore
And you already said the keystore's mode is 644. In all likelihood, logstash will be a member of the logstash group only, and /etc/logstash will be world readable. So the TL;DR version of this advice might be:
# set group on the keystore to `logstash`
chgrp logstash /etc/logstash/logstash.keystore
# ensure the keystore is group readable
chmod g+r /etc/logstash/logstash.keystore
If it wasn't permissions, you could try recreating the keystore without a password. If Logstash then works, you'll want to be really careful about how you handle the password environment variable, and go over the docs with a fine-tooth comb.
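A minimal sketch of that test (ES_PWD is a hypothetical key name; leave LOGSTASH_KEYSTORE_PASS unset so the new keystore is created without a password):
# remove the old keystore, create an unprotected one, and re-add your keys
sudo rm /etc/logstash/logstash.keystore
sudo /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
sudo /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add ES_PWD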

Related

Unable to output container password details when using ansible with podman

When using ansible-podman, I am unable to capture the stdout of the container run command, as I would on the command line. This means I don't get to see the automatically generated password and keystore password, along with other details.
Even when using the tty parameter of the ansible-podman-container, the logs report:
Auto-configuration will not generate a password for the elastic built-in superuser, as we cannot determine if there is a terminal attached to the elasticsearch process. You can use the bin/elasticsearch-reset-password tool to set the password for the elastic user.
There is no elastic user created, and when I exec into the container, the bin/elasticsearch-reset-password tool fails with:
ERROR: Failed to reset password for the [elasticsearch] user
As HTTPS is standard on the 8.5 image, I am unable to use it, since I cannot set up auth properly. Also, I cannot use apt to install an editor, because the elasticsearch user does not have sufficient permissions.
If you think this is a podman error, then please let me know, and I will hassle the devs and see if I can't get better output, tty detection, etc.
An alternative I have tried is using ansible to run a shell command, but the output is no different.
What I really want is to be able to obtain the password to output to an ansible variable so that I can spin up a pod of containers, including elasticsearch, for running tests.
Alternatively, I can use elasticsearch 7.17.7 with http, but I am going to need encryption for production, and there doesn't seem to be a way to do it with ansible.
Perhaps there is an environment variable that I am missing that I could set to create the password? I have tried setting ELASTIC_PASSWORD, but it is of no help.
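For what it's worth, the pattern the official image documentation describes is to pass ELASTIC_PASSWORD at container start; a sketch (image tag and password are placeholders), though as noted above it may not behave the same in this podman setup:
# set the elastic user's bootstrap password via the documented env var
podman run -d --name es \
  -e ELASTIC_PASSWORD=changeme \
  -e discovery.type=single-node \
  -p 9200:9200 \
  docker.elastic.co/elasticsearch/elasticsearch:8.5.0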
I am connecting from django using django-elasticsearch-dsl, and get the following error, when verify_cert is set to false:
AuthenticationException(401, 'security_exception', 'missing authentication credentials for REST request [/forum_posts_index/_search]')
Any help gratefully received...

Path settings configuration for Logstash as a Service

I want to process my logs from my db to Kibana via Logstash. Presently I am able to manually update the logs by calling the command:
sudo /usr/share/logstash/bin/logstash -f /data/Logstash_Config/Logstash_Config_File_for_TestReport.conf --pipeline.workers 1 --path.settings "/etc/logstash"
Now, I want to automate the process by running Logstash as a service. I understand that placing the path.settings parameter in the config file or another corresponding file should solve the issue, but I am not able to proceed further.
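When Logstash runs as a service it already reads its settings from /etc/logstash, so the usual approach is to point a pipeline at your config file via /etc/logstash/pipelines.yml; a sketch, reusing the paths from the command above:
# /etc/logstash/pipelines.yml
- pipeline.id: testreport
  path.config: "/data/Logstash_Config/Logstash_Config_File_for_TestReport.conf"
  pipeline.workers: 1
After editing, restart the service (e.g. sudo systemctl restart logstash).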

ERROR: Failed to determine the health of the cluster

I am running Elasticsearch and Kibana. I am not sure of the status of my Elasticsearch cluster (whether it's red, yellow, or green), but it seems I need a token generated by Elasticsearch, as in the screenshot. When I ran bin/elasticsearch-create-enrollment-token --scope kibana from the right directory, it errored out with ERROR: Failed to determine the health of the cluster.
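As an aside, one way to check the cluster status directly is the health API; a sketch, assuming security is enabled and you know the elastic password:
# prompts for the elastic user's password; -k skips verification of the self-signed cert
curl -k -u elastic "https://localhost:9200/_cluster/health?pretty"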
According to Ioannis Kakavas on discuss.elastic.co, "CLI tools extending BaseRunAsSuperuserCommand should only connect to the local node". When I run on a local node, it works; but when I run in the elasticsearch container in a cluster, it doesn't. The solution was to execute the elasticsearch-reset-password and elasticsearch-create-enrollment-token scripts, respectively, like this (inside the elasticsearch container):
/usr/share/elasticsearch/bin/elasticsearch-reset-password -i -u elastic --url https://localhost:9200
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana --url https://localhost:9200
I encountered the same problem, and I just redid the process - unzipped the ES and Kibana zip files again and ran bin/elasticsearch in the newly created directory. Look for a message encapsulated in a formatted box that contains both the password for the elastic user and the enrollment token for Kibana (the token is only valid for 30 minutes). This message only appears once, the first time you run Elasticsearch.
I proceeded to run bin/kibana for Kibana and configured it in the browser, and everything worked out from there. Hope this helps!
I had the exact same issue:
$ sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
ERROR: Failed to determine the health of the cluster.
But after I restart the elasticsearch service:
$ sudo systemctl restart elasticsearch.service
then it works:
This tool will reset the password of the [elastic] user to an autogenerated value.
The password will be printed in the console.
Please confirm that you would like to continue [y/N]y
Password for the [elastic] user successfully reset.
New value: xxxxxx
Two possible solutions:
Make sure that you have enough disk space.
Your VPN might be causing the issue.
The enrollment token will be present in the terminal itself. You just need to scroll up till you find it from when you installed.
The reason for the error ERROR: Failed to determine the health of the cluster is that Elasticsearch has not been installed yet; running that command is like calling a function without defining it.

Filebeat : data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path

Looking at the logs in one of the Filebeat pods, I can see this:
2021-01-04T10:10:52.754Z DEBUG [add_cloud_metadata] add_cloud_metadata/providers.go:129 add_cloud_metadata: fetchMetadata ran for 2.351101ms
2021-01-04T10:10:52.754Z INFO [add_cloud_metadata] add_cloud_metadata/add_cloud_metadata.go:93 add_cloud_metadata: hosting provider type detected as openstack, metadata={"availability_zone":"us-east-1c","instance":{"id":"i-08f536567bd9945df","name":"ip-10-101-2-178.ec2.internal"},"machine":{"type":"m5.2xlarge"},"provider":"openstack"}
2021-01-04T10:10:52.755Z DEBUG [processors] processors/processor.go:120 Generated new processors: add_cloud_metadata={"availability_zone":"us-east-1c","instance":{"id":"i-08f536567bd9945df","name":"ip-10-101-2-178.ec2.internal"},"machine":{"type":"m5.2xlarge"},"provider":"openstack"}, add_docker_metadata=[match_fields=[] match_pids=[process.pid, process.ppid]]
2021-01-04T10:10:52.755Z INFO instance/beat.go:392 filebeat stopped.
2021-01-04T10:10:52.755Z ERROR instance/beat.go:956 Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
As you can see, Filebeat stopped with an error:
data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
After searching for the problem on GitHub and the forums, I found this, which looks like my problem:
https://discuss.elastic.co/t/data-path-already-locked-by-another-beat/219852/4
I'm using the default filebeat-kubernetes.yaml, and there is no information in the ELK / Filebeat docs on how to add unique paths in filebeat-kubernetes.yaml.
Where do I add them, and how do I make them unique?
Thanks
I had the same problem. It means that your data path (/var/lib/filebeat) is locked by another filebeat instance. So execute sudo systemctl stop filebeat (in my case) to ensure that you don't have filebeat running,
and then run filebeat with sudo filebeat -e, which prints logs to the console.
I also tried the link that you shared, but it didn't help me. Here's another solution that may help you: https://discuss.elastic.co/t/data-path-already-locked-by-another-beat/219852/2
In addition to @Anton's answer: in one of my scenarios, I had a lock file in the data path. This could be /var/lib/filebeat/filebeat.lock, depending on the configuration. Delete the file and run sudo filebeat -e.
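Putting those steps together, a minimal sketch (the lock-file path depends on your path.data setting):
# stop the service, clear the stale lock, then run in the foreground with console logs
sudo systemctl stop filebeat
sudo rm -f /var/lib/filebeat/filebeat.lock
sudo filebeat -e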
If you want to run Elastic stack as a service, the solution is just to restart all of the stack in this order:
Elasticsearch
Kibana
Logstash
Filebeat(s)
which is already suggested in this link.
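With systemd, that ordering is just four restarts; a sketch assuming the default service names:
# restart the stack in dependency order
sudo systemctl restart elasticsearch
sudo systemctl restart kibana
sudo systemctl restart logstash
sudo systemctl restart filebeat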

How to spin up spinnaker locally for the first time

How to spin up a local version of Spinnaker? This has been answered and addressed in detail here.
https://github.com/spinnaker/spinnaker/issues/1729
Ok, so I got it to work, but not without your valuable help, @lwander!
So I'll leave the steps here for posterity.
Each line is a separate command on the command line. I installed this on a virtual machine with a freshly installed Ubuntu 14.04 copy with nothing other than SSH. Then SSH in as root; you will need to configure sshd on your console to allow root access.
https://askubuntu.com/questions/469143/how-to-enable-ssh-root-access-on-ubuntu-14-04
curl -O https://raw.githubusercontent.com/spinnaker/halyard/master/install/stable/InstallHalyard.sh
I created a user account that is a member of the adm and sudo groups (is this necessary???).
Then install Halyard:
bash InstallHalyard.sh
Verify that hal is installed and check its version:
hal -v
Tell hal that the deployment type will be a local instance (this will publish all services on localhost, which makes them tricky to access later, but I have a workaround, so keep reading):
hal config deploy edit --type localdebian
Hal will complain that a version has not been selected; just tell it which version:
hal config version edit --version 1.0.0
Then tell hal which storage you are going to use; in my case, since it is local, I want to use redis:
hal config storage edit --type redis
So now we need to add a cloud provider to hal. We use AWS, so we add it like this:
hal config provider aws edit --access-key-id XXXXXXXXXXXXXXXXXXXX --secret-access-key
I created a user on AWS and added access keys to the user inside IAM, on the user's security credentials tab. Obviously my access-key-id is not XXXXXXXXXXXXXXXXXXXX; I edited it. You do not need to enter the secret-access-key, because the command will prompt for it.
Then you need to choose a username that concerns only your Spinnaker installation; however, it will be tied to your AWS Account ID. In MY local Spinnaker installation I chose the username spinnakermaster; you should choose yours! And my AWS Account ID is not YYYYYYYYYYYY; I've edited that too.
All the configurations and steps that you'll need to do inside AWS for this to work are really well documented here:
https://www.spinnaker.io/setup/providers/aws/
And to tell hal all of the above, here's the command:
hal config provider aws account add spinnakermaster --account-id YYYYYYYYYYYY --assume-role role/spinnakerManaged
And after all that, if everything went according to plan, we can ask hal to deploy our brand-new Spinnaker installation:
hal deploy apply
It will begin a long installation, downloading and configuring all the services.
Once it has finished, you may do whatever you like, but in my case I created a monitoring script like the one described here:
https://github.com/spinnaker/spinnaker/issues/854
which can be run on a loop like this: watch -n1 spinnaker-status.sh (until you Ctrl+C it!).
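For posterity, a minimal take on such a script (a sketch only; it just probes the forwarded ports listed below to see which services answer locally):
#!/usr/bin/env bash
# spinnaker-status.sh -- report which Spinnaker service ports answer on localhost
for port in 9000 8084 8083 7002 8087 8080 8088 8089; do
  if curl -s -o /dev/null "http://127.0.0.1:${port}"; then
    echo "port ${port}: up"
  else
    echo "port ${port}: down"
  fi
done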
Then, to be able to access your local VM's Spinnaker copy, you can either set up a reverse proxy with the proxy server of your choice to forward all the requests to localhost, or you can simply ssh the SH** out of this by redirecting the ports:
ssh root@ZZZ.ZZZ.ZZZ.ZZZ -L 9000:127.0.0.1:9000 -L 8084:127.0.0.1:8084 -L 8083:127.0.0.1:8083 -L 7002:127.0.0.1:7002 -L 8087:127.0.0.1:8087 -L 8080:127.0.0.1:8080 -L 8088:127.0.0.1:8088 -L 8089:127.0.0.1:8089
Where, obviously, ZZZ.ZZZ.ZZZ.ZZZ is not an actual IP address.
And finally to begin having fun with this cutie you have to go to your browser of choice and type into the address bar:
http://127.0.0.1:9000
Hope this helps and saves some time for everybody!
Cheers.