Is it possible for Stackdriver to recognize syslog input from k8s? - go

Unable to get Stackdriver to recognize syslog levels. Everything appears as an error despite specifying DEBUG:
// Route the standard logger through the local syslog daemon at DEBUG priority.
logwriter, err := syslog.New(syslog.LOG_DEBUG, "myprog")
if err == nil {
    log.SetOutput(logwriter)
}
log.Print("log me")
I am aware of the format requirements:
if I write a correctly formatted JSON payload to stdout, Stackdriver magically picks it up, and that works for me.
But why can't Stackdriver recognize syslog input even when I syslog the same JSON payload?

Syslog is a different protocol.
Try the Go client library ("golang driver") for Cloud Logging instead; a sketch follows below.
This is tagged with k8s... if you need your Kubernetes cluster logs, just use the exporter. If you're only after pod logs, you can write to stdout and the node-level logging agent will forward them for you.
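For reference, a minimal, hedged sketch of that client-library approach, assuming the cloud.google.com/go/logging package; "my-project" and "myprog" are placeholders:

package main

import (
    "context"
    "log"

    "cloud.google.com/go/logging"
)

func main() {
    ctx := context.Background()
    client, err := logging.NewClient(ctx, "my-project") // placeholder project ID
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    lg := client.Logger("myprog")
    // The severity travels with each entry, so nothing has to be
    // inferred from a syslog priority or from stdout/stderr.
    lg.Log(logging.Entry{Severity: logging.Debug, Payload: "log me"})
}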

Related

How to output logs in a containerized application?

Normally, in docker/k8s, it's recommended to write logs directly to stdout,
so we can use kubectl logs or docker logs to see them,
like: time=123 action=write msg=hello world; on a TTY this might be colorized for human friendliness.
However, if we want to export the logs to a log processing center like EFK (Elasticsearch-Fluentd-Kibana), we need a JSON-format log file,
like: {"time":123,"action":"write","msg":"hello world"}
What I want
Is there a logging method that can take both human friendliness and JSON format into account?
I'm looking for a way that lets docker logs show human-readable output while the log collector still gets the logs in JSON format.
Conclusion
Thanks for the answers below. I've got two methods:
1. Different log formats in different environments (see the sketch after this list):
1.1 Use text format in development: docker logs will print colorized, human-readable logs.
1.2 Use JSON format in production: EFK can process JSON format well.
2. Format conversion in the log collector:
2.1 We use text format, but in a log collector like fluentd we can define scripts that translate text-format key-value pairs into JSON-format key-value pairs.
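To illustrate method 1, here is a minimal sketch using logrus; the LOG_FORMAT environment variable is an assumption for the example, not something prescribed by the answers:

package main

import (
    "os"

    log "github.com/sirupsen/logrus"
)

func main() {
    // LOG_FORMAT is a hypothetical env var used to pick the formatter.
    if os.Getenv("LOG_FORMAT") == "json" {
        log.SetFormatter(&log.JSONFormatter{}) // production: EFK-friendly JSON lines
    } else {
        log.SetFormatter(&log.TextFormatter{ForceColors: true}) // development: colorized text
    }
    log.WithFields(log.Fields{"time": 123, "action": "write"}).Info("hello world")
}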
Kubernetes has such an option of structured logging for its system components.
The klog library allows the --logging-format=json flag, which changes the format of the logs to JSON output; more information about it here and here.
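For illustration, a system component started with that flag switches klog to JSON output; a hypothetical invocation:

kube-apiserver --logging-format=json

after which entries look roughly like this (sample shape as shown in the Kubernetes structured-logging docs):

{"ts":1580306777.04728,"v":4,"msg":"Pod status updated","pod":{"name":"nginx-1","namespace":"default"},"status":"ready"}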
Yes, you can do that with Fluentd. Below are the basic action items you need to take to finalize this setup:
1. Configure the docker container to log to stdout (you can use any format you like).
2. Configure Fluentd to tail the docker files from /var/lib/docker/containers/*/*-json.log.
3. Parse the logs with Fluentd and change the format to JSON (a hedged sample config follows the list).
4. Output the logs to Elasticsearch.
This article shows exactly how to do this setup, and this one explains how to parse key-value logs.
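A hedged sketch of what steps 2-4 might look like in a Fluentd config; the pos_file path and Elasticsearch host are placeholders, and the output stage assumes the fluent-plugin-elasticsearch plugin:

<source>
  @type tail
  path /var/lib/docker/containers/*/*-json.log
  pos_file /var/log/fluentd-containers.pos
  tag docker.*
  <parse>
    @type json   # docker's json-file driver already wraps each line in JSON
  </parse>
</source>

<match docker.**>
  @type elasticsearch
  host elasticsearch.example.com   # placeholder
  port 9200
  logstash_format true
</match>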

send payara logs to graylog via syslog and set correct source

I have a Graylog instance running a UDP syslog input on port 1514.
It's working wonderfully well for all the system logs of the Linux servers.
When I try to ingest payara logs, though [1], the "source" of the message is set to "localhost" in Graylog, while it's normally the hostname of the sending server.
This is suboptimal, because ideally I want the application logs in Graylog with the correct source as well.
I googled around and found:
https://github.com/payara/Payara/blob/payara-server-5.2021.5/nucleus/core/logging/src/main/java/com/sun/enterprise/server/logging/SyslogHandler.java#L122
It seems the syslog "source" is hard-coded in payara as "localhost".
Is there a way to send payara logs with the correct "source" set?
I have nothing to do with the application server itself; I just want to receive the logs with the correct source (the hostname of the sending server).
Example log entry in /var/log/syslog for payara:
Mar 10 10:00:20 localhost [ INFO glassfish ] Bootstrapping Monitoring Console Runtime
I suspect I want the "localhost" in the above example set to the FQDN of the host.
Any ideas?
Best regards
[1]
logging.properties:com.sun.enterprise.server.logging.SyslogHandler.useSystemLogging=true
Try enabling "store full message" in the syslog input settings.
That will add the full_message field to your log messages and will contain the header, in addition to what you see in the message field. Then you can see if the source IP is in the UDP packet. If so, collect those messages via a raw/plaintext UDP input and the source should show correctly.
You may have to parse the rest of the message via an extractor or pipeline rule, but at least you'll have the source....
Well, this might not exactly be a good solution, but I tweaked the rsyslog template for Graylog.
I deploy the rsyslog config via Puppet, so I can generate "$YOURHOSTNAME-PAYARA" dynamically using the facts.
This way, I at least have the correct source set.
$template GRAYLOGRFC5424,"<%PRI%>%PROTOCOL-VERSION% %TIMESTAMP:::date-rfc3339% YOURHOSTNAME-PAYARA %APP-NAME% %PROCID% %MSGID% %STRUCTURED-DATA% %msg%\n"
# "@" forwards via UDP; "& ~" discards the message after forwarding.
if $msg contains 'glassfish' then {
    *.* @loghost.domain:1514;GRAYLOGRFC5424
    & ~
} else {
    *.* @loghost.domain:1514;RSYSLOG_SyslogProtocol23Format
}
The other thing we did was actually activate application logging through log4j and its syslog appender:
<Syslog name="syslog_app" appName="DEMO" host="loghost" port="1514" protocol="UDP" format="RFC5424" facility="LOCAL0" enterpriseId="">
    <LoggerFields>
        <KeyValuePair key="thread" value="%t"/>
        <KeyValuePair key="priority" value="%p"/>
        <KeyValuePair key="category" value="%c"/>
        <KeyValuePair key="exception" value="%ex"/>
    </LoggerFields>
</Syslog>
This way, we can ingest the glassfish server logs and the independent application logs into graylog.
The "LoggerFields" in log4j.xml appear to be key-value pairs for the "StructuredDataElements" according to RFC5424.
https://logging.apache.org/log4j/2.x/manual/appenders.html
https://datatracker.ietf.org/doc/html/rfc5424
That's the problem with UDP Syslog. The sender gets to set the source in the header. There is no "best answer" to this question. When the information isn't present, it's hard for Graylog to pass it along.
It sounds like you may have found an answer that works for you. Go with it. Using log4j solves two problems and lets you define the source yourself.
For those who face a similar issue, a simpler way to solve the source problem might be a static field. If you send the payara syslog messages to their own input, you can create a static field that substitutes for the source and identifies traffic from that source. Call it "app_name" or "app_source" or something, and use that field for whatever sorting you need to do.
Alternatively, if you have just one source for application messages, you could use a pipeline to set the value of the source field to the IP or FQDN of the payara server, so it displays like all the rest; a sketch of such a rule follows.
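For illustration, such a pipeline rule might look like the sketch below; the match condition and FQDN are placeholders, not details from this thread:

rule "payara: set source"
when
    contains(to_string($message.message), "glassfish")
then
    // placeholder FQDN of the payara server
    set_field("source", "payara-host.example.com");
end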

Logs from GCF in Go don't contain a log level

I am trying to send info/error logs to Stackdriver Logging on GCP from a Cloud Function written in Go; however, none of the logs have a log level assigned.
I created a function from https://github.com/GoogleCloudPlatform/golang-samples/blob/master/functions/helloworld/hello_logging.go
to demonstrate the problem.
Cloud Support here! What you are trying to do is not possible, as specified in the documentation:
Logs to stdout or stderr do not have an associated log level.
Internal system messages have the DEBUG log level.
What you probably need is the Logging API, specifically the Log Levels section.
If this does not work for you either, you could try using Node.js instead of Go, as it currently emits logs at the INFO and ERROR levels using console.log() and console.error() respectively.
I created a package to do exactly this:
github.com/ncruces/go-gcp/glog
It works on App Engine, Kubernetes Engine, Cloud Run, and Cloud Functions. Supports setting logging levels, request/tracing metadata, and structured logging.
Usage:
func HelloWorld(w http.ResponseWriter, r *http.Request) {
    glog.Infoln("Hello logs")
    glog.Errorln("Hello logs")

    // or, to set request metadata:
    logger := glog.ForRequest(r)
    logger.Infoln("Hello logs")
}
You can use an external library that lets you set the log level. Here is an example of what I use:
logrus for setting the minimal log level (you can improve this by providing the log level in an env var) and joonix as the fluentd formatter.
One point of attention: I rename the logrus package to log:
log "github.com/sirupsen/logrus"
So don't use the log standard library here, but the logrus library. It's sometimes confusing... you can simply use the name logrus instead of log to avoid all confusion.
Joonix is a fluentd formatter that transforms the logs into a format Stackdriver can ingest.
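Since the answer's full listing isn't reproduced here, this is a reconstructed, minimal sketch of that setup, assuming joonix's NewFormatter as documented in github.com/joonix/log:

package main

import (
    joonix "github.com/joonix/log"
    log "github.com/sirupsen/logrus" // logrus renamed to log, as noted above
)

func main() {
    log.SetFormatter(joonix.NewFormatter()) // fluentd-friendly output for Stackdriver
    log.SetLevel(log.DebugLevel)            // minimal level; could come from an env var
    log.Debug("log me")
}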
I, too, took a stab at creating a package for this. If you use logrus (https://github.com/sirupsen/logrus), then this may be helpful:
https://github.com/tekkamanendless/gcfhook
I'm using this in production, and it's working just fine.
I recommend https://github.com/apsystole/log. It's also log- and logrus-compatible, but it's a small, zero-dependency module, unlike the libraries used in two existing answers, which pull in 400+ modules as dependencies (gasp... I'm simply looking at go mod graph).

How do I get Google Stackdriver to respect logging severity from Kubernetes?

I deployed a Go application to Google Cloud using Kubernetes, which automatically logs to Google Stackdriver. Oddly, all log statements are tagged with severity "ERROR".
For example:
log.Println("This should have log level info")
will be tagged as an error.
Their docs say: "Severities: By default, logs written to the standard output are on the INFO level and logs written to the standard error are on the ERROR level."
Does anyone know what could be wrong with my setup?
Take a look at this logging package: github.com/teltech/logger, with an accompanying blog post. It will output your logs in a JSON format, including the severity, that is readable by the Stackdriver Fluentd agent.
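If you would rather avoid a dependency, here is a minimal sketch of the underlying mechanism: emit one JSON object per line on stdout with a "severity" field, which the Stackdriver fluentd agent maps to the log level:

package main

import (
    "encoding/json"
    "os"
)

// entry is the minimal structured-log shape the agent parses.
type entry struct {
    Severity string `json:"severity"`
    Message  string `json:"message"`
}

func main() {
    enc := json.NewEncoder(os.Stdout)
    // Emitted as {"severity":"INFO","message":"..."} and picked up as INFO.
    enc.Encode(entry{Severity: "INFO", Message: "This should have log level info"})
}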

Capture my Go application logs into fluentd

Currently I have a Go web application containing over 50 .go files. Each file writes logs to stdout for now.
I want to use fluentd to capture these logs and then send them to Elasticsearch/Kibana.
I searched the internet for a solution and found the package https://github.com/fluent/fluent-logger-golang.
To use it, I would need to change all the logging-related code in every Go file, and there would be many data structures I'd need to post to fluentd.
In short, I don't want to take that approach.
Please let me know if there are any other ways to do this.
Thank you.
Ideally (at least in my opinion), you would essentially just pipe stdout to Fluentd.
If you happen to be also using Docker for your application you can do this easily using the built in logging drivers:
https://docs.docker.com/engine/admin/logging/overview/
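For example, with Docker's built-in fluentd logging driver (the address, tag, and image name are placeholders):

docker run --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt tag=myapp \
  myapp:latest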
Otherwise, there seem to be a few options to help get stdout to Fluentd:
12Factor App: Capturing stdout/stderr logs with Fluentd
