Access k8s pod logs generated from ssh exec - elasticsearch

I have Filebeat configured to send my k8s cluster logs to Elasticsearch.
When I connect to the pod directly (kubectl exec -it <pod> -- sh -c bash),
the generated output logs aren't being sent to the destination.
Digging through the k8s docs, I couldn't find how k8s handles STDOUT from a running shell.
How can I configure k8s to send live shell logs?

Kubernetes has (mostly) nothing to do with this, as logging is handled by the container runtime used to support Kubernetes, which is usually Docker.
Depending on the Docker version, container logs can be written via json-file, journald, or other drivers, with json-file being the default. You can run docker info | grep -i logging to check which Logging Driver Docker is using. If the result is json-file, logs are being written to a file in JSON format. If there's another value, logs are being handled differently (and since there are several logging drivers, I suggest checking the documentation about them).
If the logs are being written to a file, chances are that by running docker inspect container-id | grep -i logpath, you'll be able to see the path on the node.
Filebeat simply harvests the logs from those files; it's Docker that handles, via its driver, the redirection between the application's STDOUT inside the container and one of those files.
Regarding exec commands not showing up in the logs: this is an open proposal ( https://github.com/moby/moby/issues/8662 ), as not everything is redirected, only the output of the processes started by the entrypoint itself.
There's a suggested workaround ( https://github.com/moby/moby/issues/8662#issuecomment-277396232 ):
In the meantime you can try this little hack....
echo hello > /proc/1/fd/1
Redirect your output into PID 1's (the docker container) file
descriptor for STDOUT
Which works just fine but has the drawback of requiring a manual redirect.
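The hack above can be wrapped in a tiny helper so that any command run from a kubectl exec session lands in the container logs. This is just a sketch, assuming the shell runs inside the container, where PID 1 is the entrypoint:

```shell
# Sketch: mirror a command's output to PID 1's stdout/stderr, so the
# container runtime's logging driver (and hence Filebeat) picks it up.
# Assumes this runs inside the container, where PID 1 is the entrypoint.
log_to_container() {
  "$@" > /proc/1/fd/1 2> /proc/1/fd/2
}

# Usage, from inside a `kubectl exec` session:
#   log_to_container echo "this line reaches the container logs"
```

It still requires remembering to call the helper, so it only softens the manual-redirect problem rather than removing it.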

Use the following process:
Make changes in your application so it pushes logs to STDOUT. You can configure this in your logging configuration file.
Configure Filebeat to read those STDOUT logs (which eventually end up in some Docker log file location, such as under /var/log).
Run Filebeat as a DaemonSet, so that logs from new pods and nodes are automatically pushed to ES.
For better readability of logs, make sure you push logs in JSON format.
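As a sketch, a minimal filebeat.yml for such a DaemonSet might look like the following (the Elasticsearch host and paths are assumptions to adapt to your cluster):

```yaml
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log    # docker's json-file logs, symlinked by kubelet
processors:
  - decode_json_fields:              # keep the JSON structure of your app logs
      fields: ["message"]
      target: ""
      overwrite_keys: true
output.elasticsearch:
  hosts: ["elasticsearch:9200"]      # assumption: in-cluster service name
```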

Related

Path settings configuration for Logstash as a Service

I want to ship my logs from my DB to Kibana via Logstash. Presently I am able to manually update the logs by calling the command: sudo /usr/share/logstash/bin/logstash -f /data/Logstash_Config/Logstash_Config_File_for_TestReport.conf --pipeline.workers 1 --path.settings "/etc/logstash"
Now, I want to automate the process by running Logstash as a service. I understand that placing the path.settings parameter in the config file or another corresponding file should solve the issue, but I am not able to proceed further.

What to do when Memgraph stops working without any info?

Sometimes the Docker container where Memgraph is running just stops working or says that the process was aborted with exit code 137. How can I fix this?
You should check the Memgraph logs, where you'll probably find the reason why the process was aborted.
Since you said that you're using Memgraph with Docker, there are two options:
If you run Memgraph with Docker using the volume for logs, that is, with
-v mg_log:/var/log/memgraph, then the mg_log folder can usually be found at \\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\ (Windows) or /var/lib/docker/volumes/ (Linux and macOS).
If you run Memgraph without using the volume for logs, then you need to enter the Docker container. In order to do that, first find the container ID by running docker ps. Then copy the container ID and run docker exec -it <containerID> bash. For example, if the container ID is 83d76fe4df5a, run docker exec -it 83d76fe4df5a bash. Next, you need to find the folder where the logs are located, by running cd /var/log/memgraph. To read the logs, run cat <memgraph_date>.log; that is, if you have a log file memgraph_2022-03-02.log inside the log folder, run cat memgraph_2022-03-02.log.
Hopefully, when you read the logs, you'll be able to fix your problem.

Kubernetes logs not found in default locations?

In my k8s environment, where Spring Boot applications run, I checked the log locations /var/log and /var/lib, but both were empty. Then I found the logs in /tmp/spring.log. It seems this is the default log location. My problems are:
How does kubectl logs know it should read logs from the /tmp location? I do get log output from the kubectl logs command.
I have fluent-bit configured with the following input:
[INPUT]
    Name tail
    Tag  kube.dev.*
    Path /var/log/containers/*dev*.log
    DB   /var/log/flb_kube_dev.db
This suggests it should read logs from /var/log/containers/, but that directory has no logs. However, I am getting fluent-bit's own logs successfully. What am I missing here?
Docker logs only contain what is written to STDOUT by your container's process with PID 1 (your container's entrypoint or cmd process).
If you want to see the logs via kubectl logs or docker logs, you should redirect your application logs to STDOUT instead of the file /tmp/spring.log. Here's an excellent example of how this can be achieved with minimal effort.
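For a Spring Boot app, one way to do this (a sketch using Spring Boot's bundled Logback defaults; adjust to your own logging setup) is a logback-spring.xml that only wires up the console appender:

```xml
<!-- logback-spring.xml: send everything to STDOUT instead of /tmp/spring.log -->
<configuration>
  <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
  <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
  <root level="INFO">
    <appender-ref ref="CONSOLE"/>
  </root>
</configuration>
```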
Alternatively, you can also use hostPath volumeMount. This way, you can directly access the log from the path on the host.
Warning when using a hostPath volumeMount
If the pod is moved to another host for some reason, your logs will not move along with it. A new log file will be created on the new host at the same path.
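A sketch of that alternative (the container name and node path are assumptions): mount a hostPath over the directory the app logs into, so spring.log appears on the node:

```yaml
spec:
  containers:
    - name: app                        # assumption: your spring-boot container
      volumeMounts:
        - name: app-logs
          mountPath: /tmp              # where spring.log is written
  volumes:
    - name: app-logs
      hostPath:
        path: /var/log/my-spring-app   # assumption: path on the host node
        type: DirectoryOrCreate
```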
If you are searching for the actual location of the logs outside the containers (on the host nodes of the cluster), this depends on a couple of things. I assume you are using Docker to run your containers under Kubernetes, which is the most common setup.
On each node of your Kubernetes cluster, you can use the following command to check what is the logging driver being currently used:
docker info | grep -i logging
The default value should be json-file, which means the containers' logs are being written as JSON to a certain location on your host nodes.
If you find another driver, such as journald, then the Docker logging driver is sending logs directly to the systemd journal. There are many logging drivers, so as a first check you should make sure that all your Kubernetes nodes are configured to log as JSON files (or in whatever way you need to harvest them).
Once this is done, you can start checking where your containers are logging their own log. Choose a Pod to analyze, then:
Identify which Kubernetes node it is running on
kubectl get pod pod-name -owide
Grab the container ID with something like the following
kubectl get pod pod-name -ojsonpath='{.status.containerStatuses[0].containerID}'
The ID should look something like docker://f834508490bd2b248a2bbc1efc4c395d0b8086aac4b6ff03b3cc8fd16d10ce2c
Remove the docker:// part, SSH into the Kubernetes node on which this container is running, then run
docker inspect container-id | grep -i logpath
This should give you the log location for that particular container. You can tail the file to check whether the logs are really there.
In my case, the container I tried this procedure on, was logging inside:
/var/lib/docker/containers/289271086d977dc4e2e0b80cc28a7a6aca32c888b7ea5e1b5f24b28f7601ff63/289271086d977dc4e2e0b80cc28a7a6aca32c888b7ea5e1b5f24b28f7601ff63-json.log
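Putting the steps above together as a sketch (the kubectl and docker calls are shown as comments since they need a live cluster; only the prefix stripping runs anywhere):

```shell
# On your workstation:
#   NODE=$(kubectl get pod pod-name -o jsonpath='{.spec.nodeName}')
#   CID=$(kubectl get pod pod-name -o jsonpath='{.status.containerStatuses[0].containerID}')
# kubectl returns the ID with a runtime prefix, e.g.:
CID='docker://f834508490bd2b248a2bbc1efc4c395d0b8086aac4b6ff03b3cc8fd16d10ce2c'
# Strip the docker:// prefix before handing the ID to docker on the node:
CID=${CID#docker://}
echo "$CID"
# Then, on the node:
#   docker inspect "$CID" | grep -i logpath
```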

Ubuntu-based Logstash Keystore Permissions Issues

Background: I'm working in an Ubuntu 20.04 environment setting up Logstash servers to ship metrics to my Elastic cluster. With my relatively basic configuration, I'm able to have a Filebeat process send logs to a Loadbalancer, which then spreads them across my Logstash servers and up to Elastic. This process works. I'd like to be able to use the Logstash Keystore to prevent having to pass sensitive variables to my logstash.yml file in plain text. In my environment, I'm able to follow the Elastic documentation to setup a password-protected keystore in the default location, add keys to it, and successfully list out those keys.
Problems: While the Logstash servers run successfully without the keystore, the moment I add it and try to watch the logfile on startup, the process never starts. It seems to keep attempting to restart without ever logging to logstash-plain.log. When trying to run the process in the foreground with this configuration, the error I received was the rather unhelpful:
Found a file at /etc/logstash/logstash.keystore,
but it is not a valid Logstash keystore
Troubleshooting Done: After trying some steps found in other issues, such as replacing the /etc/sysconfig/logstash creation with simply adding the password to /etc/default/logstash, the errors were a little more helpful, stating that the file permissions or password were incorrect. The logstash-keystore process itself was capable of creating and listing keys, so the password was correct, and the keystore itself was set to 0644. I tried multiple permissions configurations and was still unable to get Logstash to run as a process or in the foreground.
I'm still under the impression it's a permissions issue, but I don't know how to resolve it. Logstash runs as the logstash user, which should be able to read the keystore file, since it's 0644 and housed in the same dir as logstash.yml.
Has anyone experienced something similar with Logstash & Ubuntu, or in a similar environment? If so, how did you manage to get past it? I'm open to ideas and would love to get this working.
Try running logstash-keystore as the logstash user:
sudo -u logstash /usr/share/logstash/bin/logstash-keystore \
--path.settings /etc/logstash list
[Aside from the usual caveats about secret obfuscation of this kind, it's worth making explicit that the docs expect logstash-keystore to be run as root, not as logstash. So after you're done troubleshooting, especially if you create a keystore owned by logstash, make sure it ultimately has permissions that are sufficiently restrictive]
Alternatively, you could run some other command as the logstash user. To validate the permission hypothesis, you just need to read the file as user logstash:
sudo -u logstash file /etc/logstash/logstash.keystore
sudo -u logstash md5sum /etc/logstash/logstash.keystore
su logstash -c 'cat /etc/logstash/logstash.keystore > /dev/null'
# and so on
If, as you suspect, there is a permissions problem, and the read test fails, assemble the necessary data with these commands:
ls -dla /etc/logstash/{,logstash.keystore}
groups logstash
By this point you should know:
what groups logstash is in
what groups are able to open /etc/logstash
what groups are able to read /etc/logstash/logstash.keystore
And you already said the keystore's mode is 644. In all likelihood, logstash will be a member of the logstash group only, and /etc/logstash will be world readable. So the TL;DR version of this advice might be:
# set group on the keystore to `logstash`
chgrp logstash /etc/logstash/logstash.keystore
# ensure the keystore is group readable
chmod g+r /etc/logstash/logstash.keystore
If it wasn't permissions, you could try recreating the store without a password. If it then works, you'll want to be really careful about how you handle the password environment variable, and go over the docs with a fine-tooth comb.

Starting docker service with "sudo docker -d"

I am trying to push an image to my registry, but when I tried:
sudo docker push myreg:5000\image
I got an error telling me that I need to start the Docker daemon with
docker -d --insecure-registry myreg:5000
So I stopped the Docker service and started it using the command above. Once I do that, the current shell window (ssh) is stuck with Docker output, and if I close it, the Docker service stops.
I know this is an easy one, and I searched for hours and couldn't find anything.
Thank you
The problem is that when I run the command, I get all the Docker output in the shell, and if I close it, the Docker service stops. Usually -d should take care of this, but it won't work.
I think there's a confusion here; the top-level -d (docker -d) flag starts docker in daemon mode, in the foreground. This is different from the docker run -d <image> flag, which means "start a container from <image>, in detached mode". What you're seeing on your screen, is the daemon output / logs, waiting for connections from a docker client.
Back to your original issue;
The instructions to run docker -d --insecure-registry myreg:5000 could be clearer, but they illustrate that you should change the daemon options of your docker service to include the --insecure-registry myreg:5000 option.
Depending on the process manager your system uses (e.g., upstart or systemd), this means you'll have to edit the /etc/default/docker file (see the documentation), or add a "drop-in" file to override the default systemd service options; see SystemD custom daemon options
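For systemd, such a drop-in might look like this (the paths and flags are a sketch; on docker 1.8+ the daemon subcommand replaces the top-level -d):

```ini
# /etc/systemd/system/docker.service.d/insecure-registry.conf
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --insecure-registry myreg:5000
```

After adding the file, run systemctl daemon-reload and then systemctl restart docker so the override takes effect.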
Some notes;
The top-level -d option is deprecated in Docker 1.8 in favor of the new docker daemon command.
Using --insecure-registry is discouraged for security reasons, as it allows both unencrypted and untrustworthy communication with the registry. It's preferable to add your CA to your system's trusted list.