Capture my Go application logs into fluentd - elasticsearch

Currently I have a Go web application containing over 50 .go files. Each file writes logs to STDOUT for now.
I want to use Fluentd to capture these logs and then send them to Elasticsearch/Kibana.
I searched the internet for a solution to this. There is one package, https://github.com/fluent/fluent-logger-golang .
To use it I would need to change all of the logging-related code in every Go file.
And there would be many data structures that I would need to post to Fluentd.
In short, I don't want to use this approach.
Please let me know if there are any other ways to do this.
Thank you

Ideally (at least in my opinion), you would essentially just pipe stdout to Fluentd.
If you also happen to be using Docker for your application, you can do this easily using the built-in logging drivers:
https://docs.docker.com/engine/admin/logging/overview/
Otherwise, there seem to be a few options to help get stdout to Fluentd:
12Factor App: Capturing stdout/stderr logs with Fluentd
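On the Go side, no code change is needed beyond what you already have: keep writing to stdout and let the Docker fluentd log driver (or a tail-based Fluentd input) forward it. Below is a minimal sketch, using only the standard library and a made-up logEvent struct (not from the question), of emitting one JSON object per line so that the collector receives already-structured log lines:

package main

import (
    "encoding/json"
    "log"
    "os"
    "time"
)

// logEvent is a hypothetical shape; use whatever fields your app already logs.
type logEvent struct {
    Time    time.Time `json:"time"`
    Level   string    `json:"level"`
    Message string    `json:"msg"`
}

func main() {
    enc := json.NewEncoder(os.Stdout)
    // One JSON object per line on stdout; a stdout-based collector can pick this up unchanged.
    if err := enc.Encode(logEvent{Time: time.Now(), Level: "info", Message: "service started"}); err != nil {
        log.Printf("failed to encode log event: %v", err)
    }
}

The application itself takes no Fluentd dependency; forwarding is configured entirely outside the code (Docker log driver or a Fluentd tail input).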

Related

Best way to concatenate/collapse stacktraces in logs

My goal is to include my stacktrace and log message in a single log message for my Spring Boot applications. The problem I'm running into is that each line of the stacktrace is a separate log message. I want to be able to search my logs for a log level of ERROR and find the log message with the stacktrace. I've found two solutions, but I'm not sure which to use.
I can use Logback to put them all on one line, but I would like to keep the newlines for a pretty format. Also, the guide I found might override defaults that I want to keep. https://fabianlee.org/2018/03/09/java-collapsing-multiline-stack-traces-into-a-single-log-event-using-spring-backed-by-logback-or-log4j2/
I could also use ECS and concatenate it there, but it could affect other logs (though I think we only have Java apps). https://docs.aws.amazon.com/AmazonECS/latest/developerguide/firelens-concatanate-multiline.html
Which would be the best way to do it? Also is there a better way to do it in Spring compared to the guide that I found?

How to output logs in a containerized application?

Normally, in Docker/Kubernetes, it's recommended to output logs directly to stdout.
Then we can use kubectl logs or docker logs to see the logs,
like: time=123 action=write msg=hello world . On a TTY this might be colorized for human friendliness.
However, if we want to export the logs to a log processing center like EFK (Elasticsearch-Fluentd-Kibana), we need JSON-formatted logs,
like: {"time":123,"action":"write","msg":"hello world"}
What I want:
Is there a logging method that can take into account both human friendliness and JSON format?
I'm looking for a way that, if I use docker logs, I get human-readable logs, while at the same time the log collector can still get the logs in JSON format.
Conclusion
Thanks for the answers below. I now have two methods:
Use a different log format in each environment (see the sketch after this list):
1.1 Use text format in development: docker logs will print colorized, human-readable logs.
1.2 Use JSON format in production: EFK can process JSON well.
Convert the format in the log collector:
2.1 We use text format, but in the log collector (e.g. Fluentd) we can define parsing rules that translate the text-format key-value pairs into JSON.
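As a rough sketch of the first method (this is not from the answers above; LOG_FORMAT is a made-up environment variable, and logrus is just one possible library), the formatter can be switched per environment:

package main

import (
    "os"

    log "github.com/sirupsen/logrus"
)

func main() {
    // Hypothetical env var: "json" in production, anything else in development.
    if os.Getenv("LOG_FORMAT") == "json" {
        log.SetFormatter(&log.JSONFormatter{}) // EFK-friendly output
    } else {
        log.SetFormatter(&log.TextFormatter{ForceColors: true}) // colorized for `docker logs`
    }

    log.WithField("action", "write").Info("hello world")
}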
Kubernetes has such an option of structured logging for its system components.
The klog library allows using the --logging-format=json flag, which changes the log format to JSON output; more information about it here and here.
Yes, you can do that with Fluentd. Below are the basic action items you need to take to finalize this setup:
Configure the Docker container to log to stdout (you can use any format you like).
Configure Fluentd to tail the Docker log files in /var/lib/docker/containers/*/*-json.log.
Parse the logs with Fluentd and convert them to JSON.
Output the logs to Elasticsearch.
This article shows exactly how to do this setup, and this one explains how to parse key-value logs.

Logs from GCF in Go don't contain a log level

I am trying to send info/error logs to Stackdriver Logging on GCP from a Cloud Function written in Go; however, none of the logs have a log level assigned.
I have created a function from https://github.com/GoogleCloudPlatform/golang-samples/blob/master/functions/helloworld/hello_logging.go
to demonstrate the problem.
Cloud Support here! What you are trying to do is not possible, as specified in the documentation:
Logs to stdout or stderr do not have an associated log level.
Internal system messages have the DEBUG log level.
What you probably need is to use the Logging API, specifically the Log Levels section.
If this does not work for you either, you could try using Node.js instead of Go, as it currently emits logs at the INFO and ERROR levels using console.log() and console.error() respectively.
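For illustration, here is a minimal sketch of that Logging API approach using the Cloud Logging client library for Go; the project ID and log name below are placeholders, not values from the question:

package main

import (
    "context"
    "log"

    "cloud.google.com/go/logging"
)

func main() {
    ctx := context.Background()
    client, err := logging.NewClient(ctx, "my-project-id") // placeholder project ID
    if err != nil {
        log.Fatalf("failed to create logging client: %v", err)
    }
    defer client.Close() // flushes buffered entries

    logger := client.Logger("my-log") // placeholder log name
    logger.Log(logging.Entry{Severity: logging.Info, Payload: "Hello logs"})
    logger.Log(logging.Entry{Severity: logging.Error, Payload: "Hello error logs"})
}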
I created a package to do exactly this:
github.com/ncruces/go-gcp/glog
It works on App Engine, Kubernetes Engine, Cloud Run, and Cloud Functions. Supports setting logging levels, request/tracing metadata, and structured logging.
Usage:
func HelloWorld(w http.ResponseWriter, r *http.Request) {
    glog.Infoln("Hello logs")
    glog.Errorln("Hello logs")

    // or, to set request metadata:
    logger := glog.ForRequest(r)
    logger.Infoln("Hello logs")
}
You can use an external library which allows you to set the log level. Here is an example of what I use.
Use logrus to set the minimal log level (you can improve this code by providing the log level in an env var) and joonix as the Fluentd formatter (line 25).
A point of attention: on line 11, I import the logrus package under the name log:
log "github.com/sirupsen/logrus"
Thus the code uses the logrus library, not the standard library log package. This can be confusing at times; you can simply import it as logrus instead of log to avoid any confusion.
Joonix is a Fluentd formatter that transforms the logs into a format Stackdriver can ingest.
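A trimmed-down sketch of that setup (assuming joonix exposes NewFormatter() as its README shows; the level here is hard-coded rather than read from an env var):

package main

import (
    joonix "github.com/joonix/log"
    log "github.com/sirupsen/logrus" // logrus imported under the name log, as noted above
)

func main() {
    log.SetLevel(log.InfoLevel)             // minimal level: debug messages are dropped
    log.SetFormatter(joonix.NewFormatter()) // Fluentd/Stackdriver-friendly output

    log.Debug("not emitted")
    log.Info("Hello logs")
    log.Error("Hello error logs")
}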
I, too, took a stab at creating a package for this. If you use logrus (https://github.com/sirupsen/logrus), then this may be helpful:
https://github.com/tekkamanendless/gcfhook
I'm using this in production, and it's working just fine.
I recommend https://github.com/apsystole/log. It's also log- and logrus-compatible, but it is a small zero-dependency module, unlike the libraries used in two existing answers, which bring in 400+ modules as their dependencies (gasp... I am simply looking at go mod graph).
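A minimal sketch, assuming the module mirrors the standard library log and logrus call surfaces as its description suggests:

package main

import (
    "github.com/apsystole/log"
)

func main() {
    log.Println("plain message, stdlib-style")
    log.Infof("leveled message, logrus-style: %s", "info")
    log.Error("something went wrong")
}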

Is there a way in Laravel to send an output to both console and log?

I am writing a command which has a lot of service information that I need to see while the command is running.
I am outputting this info by simply running echo "some text", and that way I can see what happens when the command runs. When the same command is run by the scheduler, I have to log all this info, so I end up duplicating all the same messages with Log::info("some text").
To avoid duplication I could create a helper class that holds all of this, include it in all the service classes related to this command, and use it to avoid code duplication, but I still feel that this is not an ideal solution. Is there maybe a built-in way in Laravel to send output to the console and the log at the same time?
You could add ->appendOutputTo('path') when scheduling the task that executes your command, to store the output messages in your log file. Although I'm not sure whether this will log all console I/O (it would be good to know, in case you test it).
Check this.

Logging for two different environment logs in to a single log file

I am quite new to the log4j2 logger, and my requirement is to write logs from both an application server and a web server.
I have two different environments on which a JBoss server is deployed.
Currently I have a log file in the web server environment which records error logs, and I want the application server to write its logs to the same file as well.
Please suggest.
If you want the logs to be integrated together you should use a solution like Splunk or Elasticsearch/Logstash/Kibana (ELK).
When you try to write to a file from two different processes, the file will get corrupted unless you use file locking. However, your throughput will decrease significantly, and file locking isn't supported for rolling files. So the best approach is to send the logs to a single process where they can be aggregated.
