Zap logger add UUID to all logs in golang

I have this method used in a lambda:
import (
"os"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
)
func InitLogger() *zap.Logger {
config := zap.NewProductionEncoderConfig()
config.EncodeTime = zapcore.RFC3339TimeEncoder
consoleEncoder := zapcore.NewJSONEncoder(config)
core := zapcore.NewTee(zapcore.NewCore(consoleEncoder, zapcore.AddSync(os.Stdout), zapcore.InfoLevel))
return zap.New(core).With()
}
And in my lambda handler I have:
var (
log *zap.Logger
)
func init() {
log = u.InitLogger()
}
func handler(r events.APIGatewayProxyRequest) (*events.APIGatewayProxyResponse, error) {
out, err := exec.Command("uuidgen").Output()
if err != nil {
log.Error(err.Error())
}
uuid := strings.ReplaceAll(string(out), "\n", "")
log.Info("PRINT_1", zap.Any("uuid", uuid), zap.Any("Request", r.Body))
}
My question: is it possible to add the UUID to all logs without adding it one by one? At the moment, every time I need to log something I have to add zap.Any("uuid", uuid).
The problem is that I have to pass the UUID as a parameter to every method just so it can be printed in the info or error logs.

You will have to slightly re-arrange your code since you're only creating the UUID in the handler, which implies it's request-specific whilst the logger is global...
But the gist, specific to the library, is that you've got to create a child logger (which you are, in fact, already doing: you just need to pass the fields there). Any subsequent log writes to the child logger will include those fields.
For example:
func main() {
logger := InitLogger(zap.String("foo", "bar"))
logger.Info("First message with our `foo` key")
logger.Info("Second message with our `foo` key")
}
func InitLogger(fields ...zap.Field) *zap.Logger {
config := zap.NewProductionEncoderConfig()
config.EncodeTime = zapcore.RFC3339TimeEncoder
consoleEncoder := zapcore.NewJSONEncoder(config)
core := zapcore.NewTee(zapcore.NewCore(consoleEncoder, zapcore.AddSync(os.Stdout), zapcore.InfoLevel))
return zap.New(core).With(fields...)
}
Output:
{"level":"info","ts":"2022-11-24T18:30:45+01:00","msg":"First message with our `foo` key","foo":"bar"}
{"level":"info","ts":"2022-11-24T18:30:45+01:00","msg":"Second message with our `foo` key","foo":"bar"}

Related

How to design classes for X number of config files which need to be read individually into memory?

I am working with a lot of config files. I need to read each individual config file into its own struct and then build one giant Config struct which holds all the individual config structs.
Let's suppose I am working with 3 config files.
ClientConfig deals with one config file.
DataMapConfig deals with the second config file.
ProcessDataConfig deals with the third config file.
I created a separate file for each individual config, each with its own Readxxxxx method that reads its config and returns its struct. Below is my config.go file, whose Init method is called from the main function with the path and logger.
config.go
package config
import (
"encoding/json"
"fmt"
"io/ioutil"
"github.com/david/internal/utilities"
)
type Config struct {
ClientMapConfigs ClientConfig
DataMapConfigs DataMapConfig
ProcessDataConfigs ProcessDataConfig
}
func Init(path string, logger log.Logger) (*Config, error) {
clientConfig, err := ReadClientMapConfig(path, logger)
if err != nil {
return nil, err
}
dataMapConfig, err := ReadDataMapConfig(path, logger)
if err != nil {
return nil, err
}
processDataConfig, err := ReadProcessDataConfig(path, logger)
if err != nil {
return nil, err
}
return &Config{
ClientMapConfigs: *clientConfig,
DataMapConfigs: *dataMapConfig,
ProcessDataConfigs: *processDataConfig,
}, nil
}
clientconfig.go
package config
import (
"encoding/json"
"fmt"
"io/ioutil"
"github.com/david/internal/utilities"
)
type ClientConfig struct {
.....
.....
}
const (
ClientConfigFile = "clientConfigMap.json"
)
func ReadClientMapConfig(path string, logger log.Logger) (*ClientConfig, error) {
files, err := utilities.FindFiles(path, ClientConfigFile)
// read all the files
// do some validation on all those files
// deserialize them into ClientConfig struct
// return clientconfig object back
}
datamapconfig.go
I have a similar file for datamapconfig: an exact replica of clientconfig.go, except that it operates on a different config file name and returns a DataMapConfig struct.
processdataConfig.go
Same as clientconfig.go; the only difference is that it operates on a different config file and returns a ProcessDataConfig struct.
Problem Statement
I am looking for ideas on how the above design can be improved. Is there a better way to do this in Go? Can we use interfaces or anything else to improve the design?
If I have, let's say, 10 different files instead of 3, do I need to keep doing the same thing for the remaining 7 files? If so, the code will look ugly. Any suggestions or ideas would greatly help.
Update
Everything looks good, but I have a few questions, as I am confused about how to achieve some things with your current suggestion. For the majority of my configs your suggestion is perfect, but there are two cases, on two different configs, where I am not sure how to proceed.
Case 1: After deserializing the JSON into the original struct that matches the JSON format, I build a different struct by massaging that data, and then I return that struct instead.
Case 2: Most of my configs have one file, but a few configs consist of multiple files, and the number isn't fixed. So I pass a regex file name, find all the files matching that regex, and loop over them one by one. After deserializing each JSON file, I keep populating an object until all files have been deserialized, then build a new struct from those objects and return it.
Example of above scenarios:
Sample case 1
package config
import (
"encoding/json"
"fmt"
"io/ioutil"
"github.com/david/internal/utilities"
)
type CustomerManifest struct {
CustomerManifest map[int64]Site
}
type CustomerConfigs struct {
CustomerConfigurations []Site `json:"customerConfigurations"`
}
type Site struct {
....
....
}
const (
CustomerConfigFile = "abc.json"
)
func ReadCustomerConfig(path string, logger log.Logger) (*CustomerManifest, error) {
// I try to find all the files with my below utility method.
// Work with single file name and also with regex name
files, err := utilities.FindFiles(path, CustomerConfigFile)
if err != nil {
return nil, err
}
var customerConfig CustomerConfigs
// there is only one file for this config, so the loop will run once
for _, file := range files {
body, err := ioutil.ReadFile(file)
if err != nil {
return nil, err
}
err = json.Unmarshal(body, &customerConfig)
if err != nil {
return nil, err
}
}
customerConfigIndex := BuildIndex(customerConfig, logger)
return &CustomerManifest{CustomerManifest: customerConfigIndex}, nil
}
func BuildIndex(customerConfig CustomerConfigs, logger log.Logger) map[int64]Site {
...
...
}
As you can see in sample case 1, I build a CustomerManifest struct from the CustomerConfigs struct and return that, instead of returning CustomerConfigs directly.
Sample case 2
package config
import (
"encoding/json"
"fmt"
"io/ioutil"
"github.com/david/internal/utilities"
)
type StateManifest struct {
NotionTemplates NotionTemplates
NotionIndex map[int64]NotionTemplates
}
type NotionMapConfigs struct {
NotionTemplates []NotionTemplates `json:"notionTemplates"`
...
}
const (
// there are many files starting with "state-"; the number isn't fixed
StateConfigFile = "state-*.json"
)
func ReadStateConfig(path string, logger log.Logger) (*StateManifest, error) {
// I try to find all the files with my below utility method.
// Work with single file name and also with regex name
files, err := utilities.FindFiles(path, StateConfigFile)
if err != nil {
return nil, err
}
var defaultTemp NotionTemplates
var idx = map[int64]NotionTemplates{}
// there are a lot of config files for this config, so the loop will run multiple times
for _, file := range files {
var notionMapConfig NotionMapConfigs
body, err := ioutil.ReadFile(file)
if err != nil {
return nil, err
}
err = json.Unmarshal(body, &notionMapConfig)
if err != nil {
return nil, err
}
for _, tt := range notionMapConfig.NotionTemplates {
if tt.IsProcess {
defaultTemp = tt
} else if tt.ClientId > 0 {
idx[tt.ClientId] = tt
}
}
}
stateManifest := StateManifest{
NotionTemplates: defaultTemp,
NotionIndex: idx,
}
return &stateManifest, nil
}
As you can see, in both cases I build a different struct after deserialization is done and return that struct. With your current suggestion I don't think I can do this generically, because each config needs a different kind of massaging before its struct is returned. Is there a way to achieve this with your current suggestion? Basically, for each config, if I want to do some massaging I should be able to do it and return the new modified struct, but for the configs where I don't need any massaging I can return the deserialized JSON struct directly. Can this be done generically?
Since some configs consist of multiple files, I was using my utilities.FindFiles method to give me all the files matching a file name or regex, and then looping over those files to either return the original struct or a new struct built by massaging the original struct's data.
You can use a common function to load all the configuration files.
Assume you have config structures:
type Config1 struct {...}
type Config2 struct {...}
type Config3 struct {...}
You define configuration validators for the types that need them:
func (c Config1) Validate() error {...}
func (c Config2) Validate() error {...}
Note that these implement a Validatable interface:
type Validatable interface {
Validate() error
}
There is one config type that includes all these configurations:
type Config struct {
C1 Config1
C2 Config2
C3 Config3
...
}
Then, you can define a simple configuration loader function:
func LoadConfig(fname string, out interface{}) error {
data, err:=ioutil.ReadFile(fname)
if err!=nil {
return err
}
if err:=json.Unmarshal(data,out); err!=nil {
return err
}
// Validate the config if necessary
if validator, ok:=out.(Validatable); ok {
if err:=validator.Validate(); err!=nil {
return err
}
}
return nil
}
Then, you can load the files:
var c Config
if err:=LoadConfig("file1",&c.C1); err!=nil {
return err
}
if err:=LoadConfig("file2",&c.C2); err!=nil {
return err
}
...
If there are multiple files loading different parts of the same struct, you can do:
LoadConfig("file1",&c.C3)
LoadConfig("file2",&c.C3)
...
You can simplify this further by defining a slice:
type cfgInfo struct {
fileName string
getCfg func(*Config) interface{}
}
var configs=[]cfgInfo {
{
fileName: "file1",
getCfg: func(c *Config) interface{} {return &c.C1},
},
{
fileName: "file2",
getCfg: func(c *Config) interface{} {return &c.C2},
},
{
fileName: "file3",
getCfg: func(c *Config) interface{} {return &c.C3},
},
...
}
func loadConfigs(cfg *Config) error {
for _,f:=range configs {
if err:=LoadConfig(f.fileName,f.getCfg(cfg)); err!=nil {
return err
}
}
return nil
}
Then, loadConfigs would load all the configuration files into cfg.
func main() {
var cfg Config
if err:=loadConfigs(&cfg); err!=nil {
panic(err)
}
...
}
Any configuration that doesn't match this pattern can be dealt with using LoadConfig:
var customConfig1 CustomConfigStruct1
if err:=LoadConfig("customConfigFile1",&customConfig1); err!=nil {
panic(err)
}
cfg.CustomConfig1 = processCustomConfig1(customConfig1)
var customConfig2 CustomConfigStruct2
if err:=LoadConfig("customConfigFile2",&customConfig2); err!=nil {
panic(err)
}
cfg.CustomConfig2 = processCustomConfig2(customConfig2)
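If you also want the "massaging" cases from the update to fit the same pattern, one option is to give each entry a file pattern and an optional post-processing hook. This is only a sketch, building on the Config, LoadConfig, and configs definitions above; filepath.Glob (from "path/filepath") stands in for utilities.FindFiles, and the field names are made up for illustration.
type cfgInfo struct {
    pattern string                    // exact name or glob, e.g. "state-*.json"
    getCfg  func(*Config) interface{} // where the raw JSON is unmarshalled
    post    func(*Config) error       // optional massaging step, may be nil
}

func loadConfigs(path string, cfg *Config) error {
    for _, f := range configs {
        files, err := filepath.Glob(filepath.Join(path, f.pattern))
        if err != nil {
            return err
        }
        for _, file := range files {
            if err := LoadConfig(file, f.getCfg(cfg)); err != nil {
                return err
            }
        }
        // run any config-specific massaging (e.g. BuildIndex) after loading
        if f.post != nil {
            if err := f.post(cfg); err != nil {
                return err
            }
        }
    }
    return nil
}
For the state-*.json case, where every file has to be folded into an index as it is read, this sketch is too simple; those configs can keep their custom Read functions and be wired in the way the answer shows for CustomConfigStruct1 and CustomConfigStruct2.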

Uber Zap logger function name in logs

How do I get the function name printed in logs from Uber Zap logging?
This is the PR with which they seem to have added the functionality to output function names in logs.
I am using golang version 1.15 and go.uber.org/zap v1.16.0
This is my code:
package main
import (
"go.uber.org/zap"
)
var logger *zap.Logger
func main() {
logger := NewLogger()
logger.Info("Test msg Main")
TestFunc(logger)
}
func TestFunc(logger *zap.Logger) {
logger.Info("Test msg TestFunc")
}
func NewLogger() *zap.Logger {
config := zap.NewDevelopmentConfig()
opts := []zap.Option{
zap.AddCallerSkip(1), // traverse call depth for more useful log lines
zap.AddCaller(),
}
logger, _ = config.Build(opts...)
return logger
}
This is the output I get with or without the addition of the AddCaller() option:
2021-03-01T15:00:02.927-0800 INFO runtime/proc.go:204 Test msg Main
2021-03-01T15:00:02.927-0800 INFO cmd/main.go:12 Test msg TestFunc
I am expecting something like
2021-03-01T15:00:02.927-0800 INFO runtime/proc.go:204 main Test msg Main
2021-03-01T15:00:02.927-0800 INFO cmd/main.go:12 TestFunc Test msg TestFunc
By default, the provided encoder presets (NewDevelopmentEncoderConfig used by NewDevelopmentConfig and NewProductionEncoderConfig used by NewProductionConfig) do not enable function name logging.
To enable function name, you need to enable caller (true by default) and set a non-empty value for config.EncoderConfig.FunctionKey.
Source: EncoderConfig
type EncoderConfig struct {
// Set the keys used for each log entry. If any key is empty, that portion
// of the entry is omitted.
...
CallerKey string `json:"callerKey" yaml:"callerKey"`
FunctionKey string `json:"functionKey" yaml:"functionKey"` // this needs to be set
StacktraceKey string `json:"stacktraceKey" yaml:"stacktraceKey"`
...
}
Example Console Logger:
func main() {
config := zap.NewDevelopmentConfig()
// if you're using console encoding, the FunctionKey value can be any
// non-empty string because console encoding does not print the key.
config.EncoderConfig.FunctionKey = "F"
logger, _ := config.Build()
logger.Info("Test Logging")
// Output: 2021-03-03T11:41:47.728+0800 INFO example/main.go:11 main.main Test Logging
}
Example JSON Logger:
func main() {
config := zap.NewProductionConfig()
// the FunctionKey value matters because it will become the JSON field
config.EncoderConfig.FunctionKey = "func"
logger, _ := config.Build()
log(logger)
// Output: {"level":"info","ts":1614743088.538128,"caller":"example/main.go:15","func":"main.log","msg":"Test Logging"}
}
func log(logger *zap.Logger) {
logger.Info("Test Logging")
}
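Applied to the NewLogger from the question, a minimal adjustment could look like the sketch below. Note that zap.AddCallerSkip(1) in the original is also why the first output line points at runtime/proc.go:204: it skips one extra frame, so a call made directly from main is attributed to main's caller inside the runtime. Only add a caller skip when logging through a wrapper function.
func NewLogger() *zap.Logger {
    config := zap.NewDevelopmentConfig()
    // console encoding ignores the key name; it just has to be non-empty
    config.EncoderConfig.FunctionKey = "F"

    // caller annotation is on by default for config.Build, and AddCallerSkip(1)
    // is dropped because the logger is called directly, not through a wrapper
    logger, err := config.Build()
    if err != nil {
        panic(err)
    }
    return logger
}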

Using a logger that is initialized in main in other parts of codebase

I am struggling with how to use zap. In the docs, they provide a few examples of how to configure a logger and how to use a preset. However, I do not understand how one is supposed to use the logger that is initialized in main.go, which lives in cmd/server/server.go. According to this SO post, and several others, one cannot import from the main package outside of the package. So, based on the zap examples, how am I supposed to use the logger in, say, pkg/endpoint/my_requests (pkg is at the same level as cmd)? I have not been able to find any explicit examples (even unrelated to zap) of how to accomplish something like this; yet, I am certain this is a very simple question.
I personally favor treating it as any other normal dependency and passing it where it is needed:
package foo
type Bar struct {
Logger logger.Logger
}
func (b *Bar) Something() {
b.Logger.Debug("starting something")
}
func DoSomething(logger logger.Logger) {
b := Bar{Logger: logger}
b.Something()
}
Anything that involves an init function is basically a global variable.
I also tend to use an abstraction over whatever logger I use, and give the zero value a no-op behavior (it doesn't log anything), which is especially helpful while testing. The downside is that it's a bit slower, since the methods are not pointer receivers and require a copy, and I have to define the same methods again (in fact I abstract the sugared version so I don't import zap in my packages).
package logger
import "go.uber.org/zap"
type Logger struct {
zap *zap.Logger
}
func Must(logger *Logger, err error) *Logger {
if err != nil {
panic(err)
}
return logger
}
func NewLogger(logFile string) (*Logger, error) {
zap.NewProductionConfig()
config := zap.NewProductionConfig()
config.OutputPaths = []string{"stdout", "./logs/" + logFile}
logger, err := config.Build(zap.AddCaller())
if err != nil {
return nil, err
}
return &Logger{zap: logger}, err
}
func (l Logger) Debug(msg string, fields ...zap.Field) {
l.writer().Debug(msg, fields...)
}
func (l Logger) Info(msg string, fields ...zap.Field) {
l.writer().Info(msg, fields...)
}
// define all the methods
var noOpLogger = zap.NewNop()
func (l Logger) writer() *zap.Logger {
if l.zap == nil {
return noOpLogger
}
return l.zap
}
The zero-value no-op logger is safe for concurrent use, and the logger doesn't get in the way during testing:
var b Bar
b.Something() // no panics
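For completeness, a small sketch of how main could wire this together; the example.com/app module path is made up for illustration:
package main

import (
    "example.com/app/foo"
    "example.com/app/logger"
)

func main() {
    // Must panics on a bad config; after that the logger is passed down explicitly
    lg := logger.Must(logger.NewLogger("app.log"))

    b := foo.Bar{Logger: *lg}
    b.Something()
}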
A good idea would be to create a package responsible for the logger, or just a package like "config" or "settings" that handles global configuration such as logging.
I use zap in my projects and I usually have a package named logger which provides a base method called NewLogger. I call it from my other packages to create a package-specific logger when packages generate a lot of logs and are pretty big. In smaller projects, I just initialize the logger in the logger package and call it from outside.
// /my-project/pkg/logging/logging.go
func NewLogger(logFile string) *zap.Logger {
config := zap.NewProductionConfig()
config.OutputPaths = []string{"stdout", "./logs/" + logFile}
logger, err := config.Build(zap.AddCaller())
if err != nil {
panic(err)
}
return logger
}
And then use it in other packages:
// /my-project/pkg/a/a.go
package a
var logger *zap.Logger
func init() {
logger = logging.NewLogger("a.log")
}
func MyFunction() {
logger.Info("log from package a to a.log")
}
another package:
// /my-project/pkg/b/b.go
package b
var logger *zap.Logger
func init() {
logger = logging.NewLogger("b.log")
}
func MyFunction() {
logger.Info("log from package b to b.log")
}
You can also initialize the logger directly in your packages, but having your logger in a separate package helps you change the configuration, or the entire logger, whenever you want without making changes everywhere.
You can also add more methods and helpers to your logger package to unify your logs or just make logging easier.

Use request ID in all logger

I have a web application server using Go's net/http, and I want each request to have a context with a UUID; for this I can use the HTTP request context https://golang.org/pkg/net/http/#Request.Context
We are using logrus, and we initialize it in one file and use the logger instance in other files.
What I need is to print the request ID in all the logs without adding new parameters to each log call. I want to do it once per HTTP request (pass the req-id) and have every log line include it without doing anything extra.
e.g. if id=123, then log.Info("foo") should print
// id=123 foo
I've tried the following but I'm not sure it's the right way; please advise.
package main
import (
"context"
"errors"
log "github.com/sirupsen/logrus"
)
type someContextKey string
var (
keyA = someContextKey("a")
keyB = someContextKey("b")
)
func main() {
ctx := context.Background()
ctx = context.WithValue(ctx, keyA, "foo")
ctx = context.WithValue(ctx, keyB, "bar")
logger(ctx, nil).Info("did nothing")
err := errors.New("an error")
logger(ctx, err).Fatal("unrecoverable error")
}
func logger(ctx context.Context, err error) *log.Entry {
entry := log.WithField("component", "main")
entry = entry.WithField("ReqID", "myRequestID")
return entry
}
https://play.golang.org/p/oCW09UhTjZ5
Every time you call the logger function you are creating a new *log.Entry and writing the request ID to it again. From your question it sounded like you do not want that.
func main() {
ctx := context.Background()
ctx = context.WithValue(ctx, keyA, "foo")
ctx = context.WithValue(ctx, keyB, "bar")
lg := logger(ctx)
lg.Info("did nothing")
err := errors.New("an error")
lg.WithError(err).Fatal("unrecoverable error")
}
func logger(ctx context.Context) *log.Entry {
entry := log.WithField("component", "main")
entry = entry.WithField("ReqID", "myRequestID")
return entry
}
The downside of this is that you will have to pass the lg variable to every function this request calls that should also log the request ID.
What we did at our company is create a thin layer around logrus with an additional method, WithRequestCtx, so we could pass in the request context and have it extract the request ID itself (which we had written to the context in a middleware). If no request ID was present, nothing was added to the log entry. This does, however, add the request ID to every log entry again, as your sample code also does.
Note: our thin layer around logrus has a lot more functionality and default settings to justify the extra effort. In the long run it turned out very helpful to have one place where logging can be adjusted for all our services.
Note 2: meanwhile we are in the process of replacing logrus with zerolog to be more lightweight.
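For illustration only, the WithRequestCtx-style wrapper mentioned above could look roughly like the sketch below; the context key type and the assumption that a middleware stored the ID under it are mine, not part of the answer.
package logwrap

import (
    "context"

    log "github.com/sirupsen/logrus"
)

type ctxKey string

// requestIDKey is whatever key the middleware used when storing the request ID.
const requestIDKey = ctxKey("reqID")

// WithRequestCtx returns an entry that carries the request ID if the context has one;
// if no ID is present, nothing extra is added.
func WithRequestCtx(ctx context.Context) *log.Entry {
    entry := log.WithField("component", "main")
    if id, ok := ctx.Value(requestIDKey).(string); ok && id != "" {
        entry = entry.WithField("ReqID", id)
    }
    return entry
}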
Late answer, but all you need to do is call logrus.WithContext(/* your *http.Request.Context() goes here */)... in your application, and logrus will automatically add "id":"SOME-UUID" to each log. The design is flexible enough to extract more key-value pairs from the request context if you want to.
initialise logger
package main
import (
"path/to/logger"
"path/to/request"
)
func main() {
err := logger.Setup(logger.Config{
Level: "info", // ParseLevel rejects an empty level, so one must be set
ContextFields: map[string]interface{}{
string(request.CtxIDKey): request.CtxIDKey,
},
})
if err != nil {
// ...
}
}
logger
package logger
import (
"github.com/sirupsen/logrus"
)
type Config struct {
Level string
StaticFields map[string]interface{}
ContextFields map[string]interface{}
}
func Setup(config Config) error {
lev, err := logrus.ParseLevel(config.Level)
if err != nil {
return err
}
logrus.SetLevel(lev)
// register Config as a logrus hook so Fire runs for every entry
logrus.AddHook(config)
return nil
}
// Levels completes the logrus.Hook interface; fire for all levels
func (c Config) Levels() []logrus.Level {
return logrus.AllLevels
}
func (c Config) Fire(e *logrus.Entry) error {
for k, v := range c.StaticFields {
e.Data[k] = v
}
if e.Context != nil {
for k, v := range c.ContextFields {
if e.Context.Value(v) != nil {
e.Data[k] = e.Context.Value(v).(string)
}
}
}
return nil
}
request
package request
import (
"context"
"net/http"
"github.com/google/uuid"
)
type ctxID string
const CtxIDKey = ctxID("id")
func ID(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
h.ServeHTTP(w, r.WithContext(context.WithValue(r.Context(), CtxIDKey, uuid.New().String())))
})
}
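A minimal usage sketch tying the three pieces together (the mux wiring and route are illustrative, not part of the answer): the ID middleware wraps the handler, and logrus.WithContext picks the request ID up through the hook registered in logger.Setup.
package main

import (
    "net/http"

    "github.com/sirupsen/logrus"

    "path/to/logger"
    "path/to/request"
)

func main() {
    if err := logger.Setup(logger.Config{
        Level: "info",
        ContextFields: map[string]interface{}{
            string(request.CtxIDKey): request.CtxIDKey,
        },
    }); err != nil {
        panic(err)
    }

    mux := http.NewServeMux()
    mux.HandleFunc("/foo", func(w http.ResponseWriter, r *http.Request) {
        // every entry logged with the request context carries "id":"<uuid>"
        logrus.WithContext(r.Context()).Info("foo")
    })

    http.ListenAndServe(":8080", request.ID(mux))
}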

How to use Sentry with go.uber.org/zap/zapcore logger

I am using go.uber.org/zap/zapcore for logging in my Go app.
package logger
import (
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
"log"
)
var l *zap.Logger
func Get() *zap.Logger {
return l
}
func Init() {
conf := zap.NewProductionConfig()
logger, err := conf.Build()
if err != nil {
log.Fatal("Init logger failed", err)
}
l = logger
}
I also have a Sentry project and use github.com/getsentry/raven-go.
I want to send logs at error level and above to Sentry.
For example, when logging at info level with logger.Info() I want to just log as usual, but for error or fatal logs I need to send the messages to Sentry. How can I achieve that?
You should use a zap wrapper for adding hooks, and then apply it with the logger's WithOptions function:
sentryOptions := zap.WrapCore(func(core zapcore.Core) zapcore.Core {
return zapcore.RegisterHooks(core, func(entry zapcore.Entry) error {
// your logic here
})
})
logger = logger.WithOptions(sentryOptions)
The following will capture the message and send it to Sentry when an error-level entry is detected, with the file and line number included in the message.
err := sentry.Init(sentry.ClientOptions{Dsn: "http://~~~~~"})
if err != nil {
log.fatal("Sentry Error Setup ::", err.Error())
}
logger, _ := zap.NewDevelopment(zap.Hooks(func(entry zapcore.Entry) error {
if entry.Level == zapcore.ErrorLevel {
defer sentry.Flush(2 * time.Second)
sentry.CaptureMessage(fmt.Sprintf("%s, Line No: %d :: %s", entry.Caller.File, entry.Caller.Line, entry.Message))
}
return nil
}))
sugar := logger.Sugar()
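Combining the two answers, a hedged sketch of the WrapCore variant with the same Sentry capture logic could look like this. It uses >= so panic and fatal entries are captured as well, and note that WithOptions returns a copy, so the result has to be assigned back.
sentryOptions := zap.WrapCore(func(core zapcore.Core) zapcore.Core {
    return zapcore.RegisterHooks(core, func(entry zapcore.Entry) error {
        // send error-level and above to Sentry, keep logging as usual otherwise
        if entry.Level >= zapcore.ErrorLevel {
            defer sentry.Flush(2 * time.Second)
            sentry.CaptureMessage(fmt.Sprintf("%s, Line No: %d :: %s",
                entry.Caller.File, entry.Caller.Line, entry.Message))
        }
        return nil
    })
})
logger = logger.WithOptions(sentryOptions)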
