journalctl stops logging systemd service logs with systemd_unit field - systemd

This is a question about journalctl and how systemd log entries are produced.
My OS is RHEL8.
I've got a systemd service set up to run kubelet, and I would like to use journalctl to tail its logs with the unit flag: "journalctl -u kubelet".
When I initially start the service, I can see kubelet logs show up in /var/log/messages, and I can also filter them using "journalctl -u kubelet". However, very shortly after the service is started, journalctl goes quiet when filtering with "-u kubelet", yet kubelet logs continue to be dumped into /var/log/messages. If I filter with "journalctl --identifier kubelet" instead of "-u kubelet", I do see all the logs that are in /var/log/messages.
Using "-o json-pretty", I can see that the initial logs produced by the kubelet process have journald log entries with:
"_SYSTEMD_CGROUP" : "/system.slice/kubelet.service",
"_SYSTEMD_INVOCATION_ID" : "b422558179854d55a44d0ea6f7240828",
"_SYSTEMD_SLICE" : "system.slice",
"_SYSTEMD_UNIT" : "kubelet.service",
Logs produced shortly after the service is started seem to drop the unit property, and look like:
"_SYSTEMD_CGROUP" : "/systemd/system.slice",
"_SYSTEMD_INVOCATION_ID" : "b422558179854d55a44d0ea6f7240828",
"_SYSTEMD_SLICE" : "-.slice",
I think the fact that the logs stop carrying the "_SYSTEMD_UNIT" field explains why filtering with "-u" stops working, but I'd like to know why my service initially produces logs with the unit field and then stops. Any clues would be appreciated.

It turns out this had more to do with kubelet configuration than with journald configuration: kubelet needed to designate the correct cgroup in the kubeletCgroups config.
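For reference, a sketch of the relevant KubeletConfiguration snippet; the cgroup path is an illustrative assumption based on the _SYSTEMD_CGROUP value shown above, and must match the cgroup your unit actually runs in (systemctl status kubelet shows it):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Keep the kubelet in its systemd service cgroup so journald keeps tagging
# its output with _SYSTEMD_UNIT (path is an assumption, see lead-in):
kubeletCgroups: /system.slice/kubelet.service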

Related

Kubernetes logs not found in default locations?

In my k8s environment, where spring-boot applications run, I checked the log locations in /var/log and /var/lib, but both were empty. Then I found the logs in /tmp/spring.log. It seems this is the default log location. My problems are:
How does kubectl logs know it should read logs from the /tmp location? I do get log output from the kubectl logs command.
I have fluent-bit configured with the following input:
[INPUT]
Name tail
Tag kube.dev.*
Path /var/log/containers/*dev*.log
DB /var/log/flb_kube_dev.db
This suggests it should read logs from /var/log/containers/, but that directory does not have the logs. However, I am getting fluent-bit's own logs successfully. What am I missing here?
Docker logs only contain what is written to STDOUT by your container's process with PID 1 (your container's entrypoint or cmd process).
If you want to see the logs via kubectl logs or docker logs, you should redirect your application logs to STDOUT instead of the file /tmp/spring.log; a minimal sketch for Spring Boot follows.
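A sketch assuming the /tmp/spring.log destination comes from Spring Boot's logging.file property (called logging.file.name in Boot 2.2+), e.g. in application.properties:
# Delete or comment out the file destination; with no file configured,
# Spring Boot's default console appender writes to STDOUT, which docker captures.
# logging.file.name=/tmp/spring.log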
Alternatively, you can also use a hostPath volumeMount. This way, you can directly access the logs from a path on the host (see the sketch after the warning below).
Warning when using hostPath volumeMount
If the pod is moved to another host for some reason, your logs will not move along with it. A new log file will be created on the new host at the same path.
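A minimal sketch of the hostPath approach; the pod name, image, and host path are illustrative assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: spring-app                  # hypothetical pod name
spec:
  containers:
  - name: app
    image: my-spring-app:latest     # hypothetical image
    volumeMounts:
    - name: app-logs
      mountPath: /tmp               # where the app writes spring.log
  volumes:
  - name: app-logs
    hostPath:
      path: /var/log/spring-app     # logs become visible here on the host
      type: DirectoryOrCreate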
If you are searching for the actual location of the logs outside the containers (on the host nodes of the cluster), this depends on a couple of things. I suppose you are using Docker to run your containers under Kubernetes, which is the most common setup.
On each node of your Kubernetes cluster, you can use the following command to check what is the logging driver being currently used:
docker info | grep -i logging
The default value should be json-file, which means that container logs are written as JSON to a certain location on your host nodes.
If you find another driver, for example journald, then the Docker logging driver is sending logs directly to the systemd journal. There are many logging drivers, so as a first check you should make sure that all your Kubernetes nodes are configured to log as JSON files (or in whatever way you need to harvest them).
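If a node turns out to be using a different driver and you want json-file back, a sketch of /etc/docker/daemon.json (the max-size option is an illustrative extra; restart the docker daemon after changing it):
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}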
Once this is done, you can start checking where your containers write their logs. Choose a Pod to analyze, then:
Identify which Kubernetes node it is running on:
kubectl get pod pod-name -owide
Grab the container ID with something like the following:
kubectl get pod pod-name -ojsonpath='{.status.containerStatuses[0].containerID}'
The ID should be something in the shape of docker://f834508490bd2b248a2bbc1efc4c395d0b8086aac4b6ff03b3cc8fd16d10ce2c
Remove the docker:// part and SSH into the Kubernetes node on which this container is running, then run:
docker inspect container-id | grep -i logpath
This should give you the log location for that particular container. You can tail the file to check whether the logs are really there.
In my case, the container I tried this procedure on, was logging inside:
/var/lib/docker/containers/289271086d977dc4e2e0b80cc28a7a6aca32c888b7ea5e1b5f24b28f7601ff63/289271086d977dc4e2e0b80cc28a7a6aca32c888b7ea5e1b5f24b28f7601ff63-json.log
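Putting those steps together, a sketch assuming the first container in the Pod is the one you want and that you can SSH to the node (the pod name is a placeholder):
POD=pod-name
NODE=$(kubectl get pod "$POD" -o jsonpath='{.spec.nodeName}')
CID=$(kubectl get pod "$POD" -o jsonpath='{.status.containerStatuses[0].containerID}')
CID=${CID#docker://}                                   # strip the docker:// prefix
ssh "$NODE" "docker inspect --format '{{.LogPath}}' $CID"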

Access k8s pod logs generated from ssh exec

I have a filebeat configured to send my k8s cluster logs to Elasticsearch.
When I connect to the pod directly (kubectl exec -it <pod> -- sh -c bash),
the generated output logs aren't being sent to the destination.
Digging through the k8s docs, I couldn't find how k8s handles STDOUT from a running shell.
How can I configure k8s to send live shell logs?
Kubernetes has (mostly) nothing to do with this, as logging is handled by the container runtime that supports Kubernetes, which is usually docker.
Depending on the docker version, container logs may be written via json-file, journald, or other drivers, with the default being a JSON file. You can run docker info | grep -i logging to check which Logging Driver docker is using. If the result is json-file, logs are being written to a file in JSON format. If there's another value, logs are being handled in another way (and as there are various logging drivers, I suggest checking the documentation about them).
If the logs are being written to a file, chances are that by using docker inspect container-id | grep -i logpath, you'll be able to see the path on the node.
Filebeat simply harvests the logs from those files; it's docker that handles the redirection between the application STDOUT inside the container and one of those files, via its driver.
Regarding exec commands not appearing in the logs, this is an open proposal ( https://github.com/moby/moby/issues/8662 ), as not everything is redirected, just the output of the apps started by the entrypoint itself.
There's a suggested workaround which is ( https://github.com/moby/moby/issues/8662#issuecomment-277396232 )
In the mean time you can try this little hack....
echo hello > /proc/1/fd/1
Redirect your output into PID 1's (the docker container's) file descriptor for STDOUT.
This works just fine but has the problem of requiring a manual redirect.
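Building on that hack, a sketch of running a one-off command so its output lands in the pod's normal log stream (the command name is a placeholder; /proc/1/fd/1 is PID 1's STDOUT, as above):
kubectl exec -it <pod> -- sh -c 'mycommand > /proc/1/fd/1 2> /proc/1/fd/2'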
Use the following process:
Make changes in your application to push logs to STDOUT. You may configure this in your logging configuration file.
Configure Filebeat to read those STDOUT logs, which docker eventually writes to a file location on the host such as /var/log/containers/ (see the sketch after this list).
Run Filebeat as a DaemonSet, so that logs from new pods and nodes are automatically pushed to ES.
For better readability of the logs, make sure you push them in JSON format.
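A minimal filebeat.yml sketch under those assumptions (the container input type exists in Filebeat 7+; the Elasticsearch host is a placeholder):
filebeat.inputs:
- type: container
  paths:
    - /var/log/containers/*.log   # docker json-file logs, symlinked here by kubelet
output.elasticsearch:
  hosts: ["elasticsearch:9200"]   # hypothetical host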

Gracefully stop Phusion Passenger running on apache

I have a docker container with apache running in the foreground. On stopping the docker container, a SIGTERM is sent to all the child processes, which is apache in our case.
Now, the problem I am facing is how to gracefully shut down apache on receiving the SIGTERM signal.
Apache normally terminates the current requests immediately, which is the main cause of the problem. Somehow, I need to translate the SIGTERM signal into SIGWINCH, which would gracefully shut down the server.
I was thinking of writing some kind of wrapper script, but couldn't figure out how to start.
Any suggestions in this regard would be highly appreciated!
Thanks.
The apache inside the container can be stopped gracefully by issuing the command below (change the apache path if needed):
docker exec -it <container id / name> /usr/local/apache2/bin/apachectl -k graceful
And to your comment, if you want to see the apache log in case it is not running in the foreground:
docker exec -it <container id / name> tail -f /usr/local/apache2/logs/error_log
UPDATE: Based on the comments.
From the docker documentation, you may specify a timeout when stopping the docker container. By default, it will only wait for 10 seconds.
To stop container with different timeout:
docker stop -t <time in seconds> <container id/ name>
I believe that increasing the timeout while stopping might help in your case.
UPDATE2: sending a custom signal, SIGWINCH in your case. Please refer to the docker kill documentation for more details.
docker kill -s SIGWINCH <apache container id / name>
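Alternatively, the wrapper-script idea from the question can be sketched as an entrypoint that traps docker's SIGTERM and forwards SIGWINCH to apache (the httpd path is illustrative):
#!/bin/sh
# Start apache in the background so this shell stays PID 1 and receives signals.
/usr/local/apache2/bin/httpd -DFOREGROUND &
APACHE_PID=$!
# Translate SIGTERM (sent by docker stop) into SIGWINCH (apache's graceful stop).
trap 'kill -WINCH "$APACHE_PID"' TERM
# The first wait returns when the trap fires; wait again for apache to finish.
wait "$APACHE_PID"
trap - TERM
wait "$APACHE_PID"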
UPDATE3
There are helpful resources on signal trapping:
https://medium.com/@gchudnov/trapping-signals-in-docker-containers-7a57fdda7d86#.qp68kskwd
http://www.techbar.me/stopping-docker-containers-gracefully/
Hope these are helpful.

CoreOS EnvironmentFile directive failing

I am attempting to start gliderlabs/registrator and have it connect to consul on the COREOS_PRIVATE_IPV4 ip address.
[Unit]
Description=registrator
After=consul-server@%i.service
Requires=consul-server@%i.service
[Service]
EnvironmentFile=/etc/environment
ExecStartPre=-/usr/bin/docker kill registrator
ExecStartPre=-/usr/bin/docker rm registrator
ExecStartPre=/usr/bin/docker pull gliderlabs/registrator
ExecStart=/usr/bin/docker run --volume=/var/run/docker.sock:/var/run/docker.sock --net=host --hostname ${HOSTNAME} --name=registrator gliderlabs/registrator:latest consul://${COREOS_PRIVATE_IPV4}:8500
ExecStop=/usr/bin/docker stop registrator
[X-Fleet]
#Global=true
I am running into an error on starting the service that complains about the EnvironmentFile directive.
Dec 13 16:23:41 core-01 systemd[1]: [/run/fleet/units/registrator.service:5] Unknown lvalue 'EnvironmentFile' in section 'Unit'
I am currently running coreos 835.9.0. Does anyone have any thoughts on why this might be failing?
The unit provided does not match up with what the error is returning. The error is essentially saying that the EnvironmentFile= option is in the [Unit] section of your systemd unit, and that the option isn't valid within that section.
Either what you actually have in the unit is different from what you put here, or it's possible fleet messed it up when it parsed and rendered out the unit.
If you look at the file in /run/fleet/units/registrator.service you should be able to verify where the EnvironmentFile option is. Make sure it's in the [Service] section and not the [Unit] section.
You can also run fleetctl cat registrator.service and fleet will output the unit file definition. It's possible you submitted the unit, made changes and then didn't destroy the unit before re-submitting it.
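If a stale submission is the cause, a sketch of refreshing the unit in fleet (standard fleetctl subcommands; the unit name comes from the question):
fleetctl destroy registrator.service
fleetctl submit registrator.service
fleetctl start registrator.service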

Where is the kibana error log? Is there a kibana error log?

QUESTION: how do I debug kibana? Is there an error log?
PROBLEM 1: kibana 4 won't stay up
PROBLEM 2: I don't know where/if kibana 4 is logging errors
DETAILS:
Here's me starting kibana, making a request to the port, getting nothing, and checking the service again. The service doesn't stay up, but I'm not sure why.
vagrant@default-ubuntu-1204:/opt/kibana/current/config$ sudo service kibana start
kibana start/running, process 11774
vagrant@default-ubuntu-1204:/opt/kibana/current/config$ curl -XGET 'http://localhost:5601'
curl: (7) couldn't connect to host
vagrant@default-ubuntu-1204:/opt/kibana/current/config$ sudo service kibana status
kibana stop/waiting
Here's the nginx log, reporting when I curl -XGET from port 80, which is forwarding to port 5601:
2015/06/15 17:32:17 [error] 9082#0: *11 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: kibana, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:5601/", host: "localhost"
UPDATE: I may have overthought this a bit. I'm still interested in ways to view the kibana log, however! Any suggestions are appreciated!
I've noticed that when I run kibana from the command-line, I see errors that are more descriptive than a "Connection refused":
vagrant@default-ubuntu-1204:/opt/kibana/current$ bin/kibana
{"@timestamp":"2015-06-15T22:04:43.344Z","level":"error","message":"Service Unavailable","node_env":"production","error":{"message":"Service Unavailable","name":"Error","stack":"Error: Service Unavailable\n at respond (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/transport.js:235:15)\n at checkRespForFailure (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/transport.js:203:7)\n at HttpConnector.<anonymous> (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/connectors/http.js:156:7)\n at IncomingMessage.bound (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/node_modules/lodash-node/modern/internals/baseBind.js:56:17)\n at IncomingMessage.emit (events.js:117:20)\n at _stream_readable.js:944:16\n at process._tickCallback (node.js:442:13)\n"}}
{"@timestamp":"2015-06-15T22:04:43.346Z","level":"fatal","message":"Service Unavailable","node_env":"production","error":{"message":"Service Unavailable","name":"Error","stack":"Error: Service Unavailable\n at respond (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/transport.js:235:15)\n at checkRespForFailure (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/transport.js:203:7)\n at HttpConnector.<anonymous> (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/connectors/http.js:156:7)\n at IncomingMessage.bound (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/node_modules/lodash-node/modern/internals/baseBind.js:56:17)\n at IncomingMessage.emit (events.js:117:20)\n at _stream_readable.js:944:16\n at process._tickCallback (node.js:442:13)\n"}}
vagrant@default-ubuntu-1204:/opt/kibana/current$
Kibana 4 logs to stdout by default. Here is an excerpt of the config/kibana.yml defaults:
# Enables you specify a file where Kibana stores log output.
# logging.dest: stdout
So when invoking it with service, use the log capture method of that service. For example, on a Linux distribution using Systemd / systemctl (e.g. RHEL 7+):
journalctl -u kibana.service
One way may be to modify the init scripts to use the --log-file option (if it still exists), but I think the proper solution is to properly configure your instance's YAML file. For example, add this to your config/kibana.yml:
logging.dest: /var/log/kibana.log
Note that the Kibana process must be able to write to the file you specify, or the process will die without information (it can be quite confusing).
As for the --log-file option, I think this is reserved for CLI operations, rather than automation.
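To avoid the silent-death failure mode mentioned above, a sketch of preparing the log file first (assuming the service runs as a "kibana" user; check your install):
sudo touch /var/log/kibana.log
sudo chown kibana:kibana /var/log/kibana.log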
In kibana 4.0.2 there is no --log-file option. If I start kibana as a service with systemctl start kibana, I find the logs in /var/log/messages.
It seems that you need to pass the "-l, --log-file" flag:
https://github.com/elastic/kibana/issues/3407
Usage: kibana [options]
Kibana is an open source (Apache Licensed), browser based analytics and search dashboard for Elasticsearch.
Options:
-h, --help output usage information
-V, --version output the version number
-e, --elasticsearch <uri> Elasticsearch instance
-c, --config <path> Path to the config file
-p, --port <port> The port to bind to
-q, --quiet Turns off logging
-H, --host <host> The host to bind to
-l, --log-file <path> The file to log to
--plugins <path> Path to scan for plugins
If you use the init script to run as a service, maybe you will need to customize it.
Kibana doesn't have a log file by default, but you can set one up using the log_file Kibana server property - https://www.elastic.co/guide/en/kibana/current/kibana-server-properties.html
For Kibana 6.x on Windows, edit the shortcut so it runs "kibana -l " followed by the log path; the folder must exist.
