Kubernetes logs not found in default locations? - spring-boot

In my k8s environment where spring-boot applications run, I checked the log location in /var/log and /var/lib, but both are empty. Then I found the logs in /tmp/spring.log. It seems this is the default log location. My problems are:
How does kubectl logs know it should read logs from the /tmp location? I do get log output from the kubectl logs command.
I have fluent-bit configured with the following input:
[INPUT]
    Name tail
    Tag  kube.dev.*
    Path /var/log/containers/*dev*.log
    DB   /var/log/flb_kube_dev.db
This suggests it should read logs from /var/log/containers/, but that directory has no logs. However, I am getting fluent-bit's own logs successfully. What am I missing here?

Docker logs only contain the logs that are written to STDOUT by your container's process with PID 1 (your container's entrypoint or cmd process).
If you want to see the logs via kubectl logs or docker logs, you should redirect your application logs to STDOUT instead of the file /tmp/spring.log. This can usually be achieved with minimal effort.
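As a minimal sketch (assuming the /tmp/spring.log location comes from a logging.file.name or logging.file.path property, since Spring Boot logs to the console by default), removing that property restores STDOUT logging:
# application.properties
# Remove or comment out whichever of these put the log into /tmp, so Spring
# Boot falls back to its default console appender and Docker can capture it:
# logging.file.name=/tmp/spring.log
# logging.file.path=/tmp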
Alternatively, you can use a hostPath volumeMount. This way, you can directly access the logs from a path on the host.
Warning when using hostPath volumeMount
If the pod is shifted to another host for some reason, your logs will not move along with it; a new log file will be created on the new host at the same path.
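A minimal sketch of such a mount; the names (app-logs, /var/log/myapp) and the image are illustrative, and it assumes the app keeps writing to /tmp/spring.log:
apiVersion: v1
kind: Pod
metadata:
  name: spring-app
spec:
  containers:
  - name: app
    image: my-spring-app:latest      # illustrative image
    volumeMounts:
    - name: app-logs
      mountPath: /tmp                # where the app writes spring.log
  volumes:
  - name: app-logs
    hostPath:
      path: /var/log/myapp           # the same files appear here on the node
      type: DirectoryOrCreate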

If you are searching for the actual location of the logs outside the containers (on the host nodes of the cluster), this depends on a couple of things. I suppose you are using Docker to run your containers under Kubernetes, which is the most common setup.
On each node of your Kubernetes cluster, you can use the following command to check what is the logging driver being currently used:
docker info | grep -i logging
The default value should be json-file, which means that container logs are written as JSON to a certain location on your host nodes.
If you find another driver, such as journald, that means the Docker logging driver is sending logs directly to the systemd journal. There are many logging drivers, so as a first check you should make sure that all your Kubernetes nodes are configured to log as json files (or in whatever way you need to harvest them).
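To pin a node to json-file, a typical /etc/docker/daemon.json looks like this (a sketch; the rotation options are optional, and dockerd must be restarted after editing):
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}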
Once this is done, you can start checking where your containers are logging their own log. Choose a Pod to analyze, then:
Identify which Kubernetes node it is running on
kubectl get pod pod-name -owide
Grab the container ID with something like the following
kubectl get pod pod-name -ojsonpath='{.status.containerStatuses[0].containerID}'
where the ID should look something like docker://f834508490bd2b248a2bbc1efc4c395d0b8086aac4b6ff03b3cc8fd16d10ce2c
Remove the docker:// part, SSH into the Kubernetes node on which this container is running, then run
docker inspect container-id | grep -i logpath
which should give you the log location for that particular container. You can tail the file to check whether the logs are really there.
In my case, the container I tried this procedure on, was logging inside:
/var/lib/docker/containers/289271086d977dc4e2e0b80cc28a7a6aca32c888b7ea5e1b5f24b28f7601ff63/289271086d977dc4e2e0b80cc28a7a6aca32c888b7ea5e1b5f24b28f7601ff63-json.log
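The steps above can be scripted end to end; a sketch assuming the pod's first container, the Docker runtime, and SSH access to the node (docker inspect's --format can extract LogPath directly):
POD=pod-name
NODE=$(kubectl get pod "$POD" -o jsonpath='{.spec.nodeName}')
CID=$(kubectl get pod "$POD" -o jsonpath='{.status.containerStatuses[0].containerID}' | sed 's|docker://||')
ssh "$NODE" docker inspect --format '{{.LogPath}}' "$CID"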

Related

Path settings configuration for Logstash as a Service

I want to process my logs from my db to Kibana via Logstash. Presently I am able to update the logs manually by running the command:
sudo /usr/share/logstash/bin/logstash -f /data/Logstash_Config/Logstash_Config_File_for_TestReport.conf --pipeline.workers 1 --path.settings "/etc/logstash"
Now I want to automate the process by running Logstash as a service. I understand that placing the path.settings parameter in the config file or another corresponding file should solve the issue, but I am not able to proceed further.
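A hedged sketch of one way to do this: when Logstash is started as a systemd service it already uses --path.settings /etc/logstash, and it looks up pipelines in /etc/logstash/pipelines.yml, so the manual flags can move there (the pipeline id is illustrative):
# /etc/logstash/pipelines.yml
- pipeline.id: test-report
  path.config: "/data/Logstash_Config/Logstash_Config_File_for_TestReport.conf"
  pipeline.workers: 1
After that, sudo systemctl start logstash should pick the pipeline up without any manual flags.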

What to do when Memgraph stops working without any info?

Sometimes the Docker container where Memgraph is running just stops working or says that the process was aborted with exit code 137. How can I fix this?
You should check the Memgraph logs, where you'll probably find the reason why the process was aborted.
Since you said that you're using Memgraph with Docker, there are two options:
If you run Memgraph with Docker using the volume for logs, that is, with
-v mg_log:/var/log/memgraph, then the mg_log folder can usually be found at \\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\ (Windows) or /var/lib/docker/volumes/ (Linux and macOS).
If you run Memgraph without the volume for logs, then you need to enter the Docker container. To do that, first find the container ID by running docker ps, then run docker exec -it <containerID> bash. For example, if the container ID is 83d76fe4df5a, run docker exec -it 83d76fe4df5a bash. Next, go to the folder where the logs are located by running cd /var/log/memgraph. To read a log, run cat <memgraph_date>.log; for example, if the log file memgraph_2022-03-02.log is inside the log folder, run cat memgraph_2022-03-02.log.
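As a shorter variant of the second option that avoids an interactive shell (the file name is taken from the example above):
docker exec <containerID> cat /var/log/memgraph/memgraph_2022-03-02.log
docker cp <containerID>:/var/log/memgraph ./memgraph-logs   # copy the whole log folder out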
Hopefully, when you read the logs, you'll be able to fix your problem.

Access k8s pod logs generated from ssh exec

I have Filebeat configured to send my k8s cluster logs to Elasticsearch.
When I connect to the pod directly (kubectl exec -it <pod> -- sh -c bash),
the generated output logs aren't being sent to the destination.
Digging through the k8s docs, I couldn't find how k8s handles STDOUT from a running shell.
How can I configure k8s to send live shell logs?
Kubernetes has (mostly) nothing to do with this, as logging is handled by the container runtime that backs Kubernetes, which is usually Docker.
Depending on the Docker version, container logs can be written to json-file, journald or others, with the default being a json file. You can run docker info | grep -i logging to check which Logging Driver Docker is using. If the result is json-file, logs are written to a file in json format. If there's another value, logs are handled in another way (and as there are various logging drivers, I suggest checking the documentation about them).
If the logs are written to a file, chances are that docker inspect container-id | grep -i logpath will show you the path on the node.
Filebeat simply harvests the logs from those files; it's Docker that, via its driver, handles the redirection between the application's STDOUT inside the container and one of those files.
Regarding exec commands not appearing in the logs, this is an open proposal (https://github.com/moby/moby/issues/8662), as not everything is redirected; only the output of the apps started by the entrypoint itself is. There's a suggested workaround (https://github.com/moby/moby/issues/8662#issuecomment-277396232):
In the mean time you can try this little hack....
echo hello > /proc/1/fd/1
Redirect your output into PID 1's (the docker container) file
descriptor for STDOUT
Which works just fine but has the problem of requiring a manual redirect.
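Building on that hack, a hypothetical helper you could paste into the exec'd shell so every wrapped command is mirrored into the container's log stream (the name logit is made up):
logit() { "$@" 2>&1 | tee /proc/1/fd/1; }   # tee copies the output to PID 1's STDOUT
logit echo "hello from kubectl exec"        # appears in kubectl logs too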
Use the following process:
Make changes in your application to push logs to STDOUT. You can configure this in your logging configuration file.
Configure Filebeat to read those STDOUT logs (which eventually land in some Docker log file location such as /var/log/containers); see the sketch after this list.
Run Filebeat as a DaemonSet, so that logs from new pods and nodes are automatically pushed to ES.
For better readability of the logs, make sure you push them in JSON format.
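A minimal sketch of that Filebeat configuration, assuming Filebeat 7+, the standard /var/log/containers symlinks on each node, and an illustrative Elasticsearch host:
filebeat.inputs:
- type: container
  paths:
    - /var/log/containers/*.log
output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]   # illustrative host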

Docker: "Cannot access '/var/lib/docker/containers': No such file or directory" error [duplicate]

I'm using Windows 10 with a native Docker installation.
I'm looking for the location where Docker saves the container logs.
On Linux, the Docker container log files are in this location:
/var/lib/docker/containers/container-id/container-id-json.log
But where can I find them on Windows 10?
For Windows 10 + WSL 2 (Ubuntu 20.04), Docker version 20.10.2, build 2291f61
Let DOCKER_ARTIFACTS == \\wsl$\docker-desktop-data\version-pack-data\community\docker
Container logs can be found in the following location:
DOCKER_ARTIFACTS\containers\[Your_container_ID]\[Your_container_ID]-json.log
Check first if those logs are in (as suggested here):
C:\ProgramData\docker\containers\[container_ID]\[container_ID]-json.log
C:\ProgramData\docker is the Docker Root Dir reported by docker info.
Regarding Docker Linux through Hyper-V, check if "How to Delete Docker Container Log Files (Windows or Linux)" from Jon Gallant can help:
Run docker info and find the "Docker Root Dir" value; mine is /var/lib/docker
Your Docker log file path should then be under /var/lib/docker, but if it isn't, change it in the command below.
find /var/lib/docker/containers/ -type f -name "*.log"
This command is based on "How to SSH into the Docker VM (MobyLinuxVM) on Windows":
We aren't technically going to SSH into the VM; we'll create a container that has full root access and then access the file system from there.
Get a container with access to the Docker daemon
Run a container with full root access
Switch to the host file system
Open a command prompt and execute the following:
docker run --privileged -it -v /var/run/docker.sock:/var/run/docker.sock jongallant/ubuntu-docker-client
docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v /:/host alpine /bin/sh
chroot /host
Execute the find command there, and you should find the logs.
For Windows 10 + Docker Desktop version 3.6.0, the virtual path for logs and data (artifacts) is \\wsl$\docker-desktop-data\version-pack-data\community\docker (you can copy/paste it in Explorer navigation bar).
The logs are at \\wsl$\docker-desktop-data\version-pack-data\community\docker\containers\[containerID]\[containerID]-json.log
and the data is under \\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\[volumeID]\_data
How to get containerID:
You can find the (truncated) container ID by running docker ps in a command prompt. You can also find it by clicking the CLI button in Docker Desktop next to the container name; the ID will be in the title of the cmd window that pops up.
Once you have the ID, you can navigate to containers\[containerID] under the artifacts directory (\\wsl$\docker-... above). The log file will have the .log extension and its name will contain the containerID. Keep in mind that it is in an enriched JSON format though, so not easily readable.
How to get volumeID:
To find a container's data (for example Kafka broker topics), you need to find the ID of the volume where the data is stored. Click on the container in Docker Desktop, then click INSPECT (top right) and scroll down to the Mounts configuration entries. Each mount has a volumeID (different from the containerID), and that volumeID is part of a path similar to /var/lib/docker/volumes/71f7a5992c58fdcf229c3848acb014712f34fab380bc7c712cf5a0a632fe9110/_data, the volumeID here being 71f7a5992c58fdcf229c3848acb014712f34fab380bc7c712cf5a0a632fe9110.
You can then take volumeID and navigate to volumes\[volumeID] under the artifacts directory (\\wsl$\docker-... above) where the data will be located.
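If you prefer the CLI to Docker Desktop's INSPECT button, a hedged equivalent that prints the same Mounts entries (including each volumeID):
docker inspect --format '{{json .Mounts}}' [containerID]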
For me, using Docker Desktop for Windows on version 4.9.1 (81317), Windows 10 21H2, WSL 2 mode, the containers' folders were at
\\wsl$\docker-desktop-data\data\docker\containers
Slightly different from the others.
For Windows users who want to delete all Docker log files on WSL 2:
The path to the Docker containers is correct thanks to #craftsmannadeem:
\\wsl$\docker-desktop-data\version-pack-data\community\docker\containers
Here is a command to execute on Windows to delete all log files:
del /s \\wsl$\docker-desktop-data\version-pack-data\community\docker\containers\*-json.log
Bye bye Docker logs:
File was deleted - \\wsl$\docker-desktop-data\version-pack-data\community\docker\containers\2012efd0ccfb8aed6291dd9a3b7b5aef507b6af4fce5b85e9306f45980db9531\2012efd0ccfb8aed6291dd9a3b7b5aef507b6af4fce5b85e9306f45980db9531-json.log
File was deleted - \\wsl$\docker-desktop-data\version-pack-data\community\docker\containers\9e627f1fe8f3c3ab85c64f85f93942d1f077e9a6e2896b51df782b0c0c3777d1\9e627f1fe8f3c3ab85c64f85f93942d1f077e9a6e2896b51df782b0c0c3777d1-json.log
File was deleted - \\wsl$\docker-desktop-data\version-pack-data\community\docker\containers\6ea8f3cb354c199bc719701f8f1e75c333f81cd2f03dca0c7a626cbcbf9ed5a0\6ea8f3cb354c199bc719701f8f1e75c333f81cd2f03dca0c7a626cbcbf9ed5a0-json.log
...
On Windows, the logs are located at C:\ProgramData\Docker\containers or %APPDATA%\Docker
On Linux, the logs are located at /var/lib/docker/containers
I couldn't find where the logs were stored locally (there's a good chance they aren't plain text any more). However, if you just need the output of the logs, you can run a command like this:
docker logs --details [container-name] > container-name.log
This will grab the logs for the container and write them to a log file in the current directory.
Note: --details adds additional info to the logs, such as environment variables, but is not required for the command to work.
If you use docker-compose with Windows + WSL: in my case the log monitoring agent (also running as a container in Docker) was not able to find the log files, even though the path for logs was mounted as a volume:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
  - /var/lib/docker:/var/lib/docker
The log monitoring agent could not find logs in
/var/lib/docker/containers/**/*.log
The problem in my case was that I was running the docker-compose up command for the log monitoring agent from within the WSL shell. When I ran it from Windows PowerShell or cmd, the agent was able to find the logs in the mounted path.

Filebeat : data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path

Looking at the logs in one of the Filebeat pods, I can see this:
2021-01-04T10:10:52.754Z DEBUG [add_cloud_metadata] add_cloud_metadata/providers.go:129 add_cloud_metadata: fetchMetadata ran for 2.351101ms
2021-01-04T10:10:52.754Z INFO [add_cloud_metadata] add_cloud_metadata/add_cloud_metadata.go:93 add_cloud_metadata: hosting provider type detected as openstack, metadata={"availability_zone":"us-east-1c","instance":{"id":"i-08f536567bd9945df","name":"ip-10-101-2-178.ec2.internal"},"machine":{"type":"m5.2xlarge"},"provider":"openstack"}
2021-01-04T10:10:52.755Z DEBUG [processors] processors/processor.go:120 Generated new processors: add_cloud_metadata={"availability_zone":"us-east-1c","instance":{"id":"i-08f536567bd9945df","name":"ip-10-101-2-178.ec2.internal"},"machine":{"type":"m5.2xlarge"},"provider":"openstack"}, add_docker_metadata=[match_fields=[] match_pids=[process.pid, process.ppid]]
2021-01-04T10:10:52.755Z INFO instance/beat.go:392 filebeat stopped.
2021-01-04T10:10:52.755Z ERROR instance/beat.go:956 Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
As you can see, Filebeat stopped with an error:
data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
After searching for the problem on GitHub and the forums, I found this:
https://discuss.elastic.co/t/data-path-already-locked-by-another-beat/219852/4
which looks like my problem.
I'm using the default filebeat-kubernetes.yaml, and there is no information in the ELK / Filebeat docs on how to add unique paths in filebeat-kubernetes.yaml.
Where do I add them and how do I make them unique?
Thanks
I had the same problem. It means that your data path (/var/lib/filebeat) is locked by another Filebeat instance. So execute sudo systemctl stop filebeat (in my case) to ensure that no Filebeat is running,
and then run Filebeat with sudo filebeat -e, which prints its logs to the console.
I also tried the link that you shared, but it didn't help me. Here is another solution that may help you: https://discuss.elastic.co/t/data-path-already-locked-by-another-beat/219852/2
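For the original question about unique paths: each beat keeps its lock file under path.data, so a hedged sketch is to give every instance its own value in its configuration (the directory name is illustrative):
path.data: /var/lib/filebeat-dev   # must differ for each beat sharing the host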
In addition to @Anton's answer: in one of the scenarios, I had a lock file in the data path. This could be /var/lib/filebeat/filebeat.lock depending on the configuration. Delete the file and run sudo filebeat -e.
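Putting both answers together as a sketch (the lock file location depends on your path.data):
sudo systemctl stop filebeat                 # make sure no other instance holds the lock
sudo rm -f /var/lib/filebeat/filebeat.lock   # remove the stale lock left behind
sudo filebeat -e                             # run in the foreground, logging to the console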
If you want to run the Elastic stack as a service, the solution is just to restart the whole stack in this order:
Elasticsearch
Kibana
Logstash
Filebeat(s)
as already suggested in this link.
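With systemd, that order looks like this (unit names assume the default packages):
sudo systemctl restart elasticsearch
sudo systemctl restart kibana
sudo systemctl restart logstash
sudo systemctl restart filebeat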
