Logrus hooks with syslog on demand - go

I'm using Go's logrus for logging, and I have a wrapper with all the regular functions like Info(..), Infof(..), etc. I want to implement a wrapper function Audit(..) for logging to syslog.
The problem I noticed with the logrus syslog hook is that once it is added, every log function logs to syslog, including Infof(..), which I don't want.
Is there a way I can log to syslog on demand, other than something like this?
func (l *WrapLogger) Audit(msg string) {
	l.logger.AddHook(syslogHook)
	l.logger.Info(msg)
	l.logger.ReplaceHooks(make(logrus.LevelHooks)) // removing the hook again somehow
}
Thanks

If you're trying to decide which messages go where based on their log level, you can do that by setting the log levels each hook accepts.
For example:
log.AddHook(&writer.Hook{
	Writer:    os.Stderr,
	LogLevels: []log.Level{log.WarnLevel},
})

// NewSyslogHook also returns an error, so it cannot be passed to AddHook directly.
syslogHook, err := lSyslog.NewSyslogHook("udp", "localhost:514", syslog.LOG_INFO, "")
if err == nil {
	log.AddHook(syslogHook)
}

log.Info("This will go to syslog")
log.Warn("This will go to stderr")
If you want to route messages by something other than the log level, then what you suggested may work, but it feels odd and may have race conditions.
What I suggest is creating your own hook that holds a list of hooks and routes each entry to the right hook(s) according to the message or the fields passed when calling Info, Warn, etc.
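A minimal sketch of that idea, simplified to wrap a single target hook (the syslog hook from above); the "audit" marker field is made up for illustration and assumes your Audit(..) wrapper sets it, e.g. via WithField:
// routingHook forwards an entry to the wrapped hook only when the entry
// carries a marker field, e.g. one set by your Audit(..) wrapper.
type routingHook struct {
	target logrus.Hook // e.g. the syslog hook
}

func (h *routingHook) Levels() []logrus.Level { return logrus.AllLevels }

func (h *routingHook) Fire(entry *logrus.Entry) error {
	if _, ok := entry.Data["audit"]; ok {
		return h.target.Fire(entry)
	}
	return nil
}
Your Audit(..) wrapper would then call l.logger.WithField("audit", true).Info(msg), and only those entries reach syslog.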

Related

MassTransit EndpointConvention Azure Service Bus

I'm wondering if I'm doing something wrong; I expected MassTransit to automatically register receive endpoints in the EndpointConvention.
Sample code:
services.AddMassTransit(x =>
{
    x.AddServiceBusMessageScheduler();
    x.AddConsumersFromNamespaceContaining<MyNamespace.MyRequestConsumer>();

    x.UsingAzureServiceBus((context, cfg) =>
    {
        // Load the connection string from the configuration.
        cfg.Host(context.GetRequiredService<IConfiguration>().GetValue<string>("ServiceBus:ConnectionString"));

        cfg.UseServiceBusMessageScheduler();

        // Without this line I'm getting an error complaining about no endpoint convention for x could be found.
        EndpointConvention.Map<MyRequest>(new Uri("queue:queue-name"));

        cfg.ReceiveEndpoint("queue-name", e =>
        {
            e.MaxConcurrentCalls = 1;
            e.ConfigureConsumer<MyRequestConsumer>(context);
        });

        cfg.ConfigureEndpoints(context);
    });
});
I thought this line, EndpointConvention.Map<MyRequest>(new Uri("queue:queue-name"));, wouldn't be necessary to allow sending to the bus without specifying the queue name, or am I missing something?
await bus.Send<MyRequest>(new { ...});
The EndpointConvention is a convenience method that allows the use of Send without specifying the endpoint address. There is nothing in MassTransit that will automatically configure this because, frankly, I don't use it, and I don't think anyone else should either. That stated, people do use it for whatever reason.
First, think about the ramifications: if every message type were registered as an endpoint convention, what about messages that are published and consumed on multiple endpoints? That wouldn't work.
So, if you want to route messages by message type, MassTransit has a feature for that. It's called Publish and it works great.
But wait, it's a command, and commands should be Sent.
That is true, however, if you are in control of the application and you know that there is only one consumer in your code base that consumes the KickTheTiresAndLightTheFires message contract, publish is as good as send and you don't need to know the address!
No, seriously dude, I want to use Send!
Okay, fine, here are the details. When using ConfigureEndpoints(), MassTransit uses the IEndpointNameFormatter to generate the receive endpoint queue names based upon the types registered via AddConsumer, AddSagaStateMachine, etc., and that same interface can be used to register your own endpoint conventions if you want to use Send without specifying a destination address.
You are, of course, coupling the knowledge of your consumer and message types, but that's your call. You're already dealing with magic (by using Send without an explicit destination), so why not, right?
string queueName = formatter.Consumer<T>();
Use that string as a $"queue:{queueName}" address for the message types handled by that consumer and register it on the EndpointConvention.
Or, you know, just use Publish.

Proper logging implementation in Golang package

I have a small Golang package which does some work. This work assumes a high number of errors could be produced, and that is OK. Currently all errors are ignored. Yes, it may look strange, but visit the link and check the main purpose of the package.
I'd like to extend the functionality of the package and provide the ability to see errors that occurred at runtime. But due to a lack of software design skills, I have some questions with no answers.
At first, I thought of implementing logging inside the package using an existing logging library (zerolog, zap, or whatever else). But will that be OK for the package's users? They might want to use other logging packages, or want a different output format.
Maybe it's possible to provide a way for the user to inject their own logger?
I'd like to provide an easily configurable way of logging which can be switched on or off on the user's demand.
Some Go libraries handle logging like this.
In your package, define a logger interface:
type Yourlogging interface {
	Errorf(format string, args ...interface{})
	Warningf(format string, args ...interface{})
	Infof(format string, args ...interface{})
	Debugf(format string, args ...interface{})
}
And define a variable of this interface type:
var mylogger Yourlogging

func SetLogger(l Yourlogging) {
	mylogger = l
}
In your functions, you can call it for logging:
mylogger.Infof(..)
mylogger.Errorf(...)
You don't need to implement the interface yourself; you can pass in any value that already implements it.
For example:
// Any value that satisfies the interface works, for example a logrus logger:
SetLogger(logrus.New()) // route the package's logging to logrus (github.com/sirupsen/logrus)
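One gap in the snippet above worth guarding against (a sketch, keeping the Yourlogging interface as defined): if SetLogger is never called, mylogger is nil and every logging call panics, so a no-op default is a safer starting value.
// noopLogger is a do-nothing default so the package can log safely
// even when the caller never injects a logger.
type noopLogger struct{}

func (noopLogger) Errorf(format string, args ...interface{})   {}
func (noopLogger) Warningf(format string, args ...interface{}) {}
func (noopLogger) Infof(format string, args ...interface{})    {}
func (noopLogger) Debugf(format string, args ...interface{})   {}

var mylogger Yourlogging = noopLogger{}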
In Go, you will see some libraries implement logging interfaces like the other answers have suggested. However, you could avoid your package needing to log at all if you structured your application differently.
For example, in the application you linked, your main application runtime calls idleexacts.Run(), which starts this function:
// startLoop starts workload using passed settings and database connection.
func startLoop(ctx context.Context, log log.Logger, pool db.DB, tables []string, jobs uint16, minTime, maxTime time.Duration) error {
	rand.Seed(time.Now().UnixNano())

	// Increment maxTime up to 1 due to rand.Int63n() never return max value.
	maxTime++

	// While running, keep required number of workers using channel.
	// Run new workers only until there is any free slot.
	guard := make(chan struct{}, jobs)
	for {
		select {
		// Run workers only when it's possible to write into channel (channel is limited by number of jobs).
		case guard <- struct{}{}:
			go func() {
				table := selectRandomTable(tables)
				naptime := time.Duration(rand.Int63n(maxTime.Nanoseconds()-minTime.Nanoseconds()) + minTime.Nanoseconds())

				err := startSingleIdleXact(ctx, pool, table, naptime)
				if err != nil {
					log.Warnf("start idle xact failed: %s", err)
				}

				// When worker finishes, read from the channel to allow starting another worker.
				<-guard
			}()
		case <-ctx.Done():
			return nil
		}
	}
}
The problem here is that all of the orchestration of your logic is happening inside your package. Instead, this loop should run in your main application, and the package should provide users with simple actions such as selectRandomTable() or createTempTable().
If the orchestration were in your main application and the package only provided simple actions, it would be much easier to return errors to the user as part of the function calls.
It would also make your package easier for others to reuse, because simple actions leave users free to combine them in ways you did not originally intend.
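A minimal sketch of that shape, reusing the function names quoted above (their exact signatures are assumptions); the application, not the package, decides whether and how to log:
// In the application: orchestrate the work and handle errors however you like.
table := selectRandomTable(tables)
if err := startSingleIdleXact(ctx, pool, table, naptime); err != nil {
	// The app owns the logger choice; the package only returns the error.
	log.Warnf("start idle xact failed: %s", err)
}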

Is there a way to output Sentry messages to the console

I'm working on a suite of microservices written in Go. I have a demo in a couple of months, and by next year these services should be in production. For now, I'm just hashing out all the basics and boilerplate, including calls to Sentry.
All of the services make several async requests that set several processes in motion. If one thing fails, I don't want to panic or return; I want to continue execution, but I want to be able to go back and see what happened.
While developing, I don't really want to send anything to Sentry, but I want to see what the output to Sentry would be so I can make sure that the messages, breadcrumbs, stack traces, etc. are all being captured as intended. Is anything like this possible? I tried running the local server, but it's quite bloated: it fired up about 20 Docker containers and consumed a LOT of memory. I'm just looking for something lightweight so I can see what's going on.
I came up with a solution. The output is very verbose, but it's exactly what I was looking for (for now). I simply provided my own transport implementation and passed it in via ClientOptions:
type consoleTransport struct{}

func (t *consoleTransport) Configure(options sentry.ClientOptions) {
	zap.L().Info("Sentry client initialized with an empty DSN. Using consoleTransport. No events will be delivered.")
}

func (t *consoleTransport) SendEvent(event *sentry.Event) {
	b, _ := json.Marshal(event)
	fmt.Println("[SENTRY CONSOLE] " + string(b))
}

func (t *consoleTransport) Flush(_ time.Duration) bool {
	return true
}

Golang Logging with Mapped Diagnostic Context

How can I achieve MDC logging (as in Java) in Go?
I need to add UUIDs to all server logs in order to be able to trace concurrent requests.
Java MDC relies on thread local storage, something Go does not have.
The closest thing is to thread a Context through your stack.
This is what more and more libraries are doing in Go.
A somewhat typical way to do this is via a middleware package that adds a request id to the context of a web request, like:
req = req.WithContext(context.WithValue(req.Context(), "requestId", ID))
Then, assuming you pass the context around, you pull it out with ctx.Value("requestId") and use it wherever it makes sense.
Possibly making your own custom logger function like:
func logStuff(ctx context.Context, msg string) {
	log.Println(ctx.Value("requestId"), msg) // call stdlib logger
}
There's a bunch of ways you may want to handle this, but that's a fairly simple form.
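A minimal sketch of such a middleware (assumptions: net/http and github.com/google/uuid for generating the id; the plain string key follows the snippet above, though a dedicated key type avoids collisions):
func requestIDMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		// Attach a fresh id to this request's context so downstream
		// handlers and logStuff can pick it up.
		id := uuid.NewString()
		req = req.WithContext(context.WithValue(req.Context(), "requestId", id))
		next.ServeHTTP(w, req)
	})
}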

How to disable go_collector metrics in prometheus/client_golang

I am using a NewGaugeVec to report my metrics:
elapsed := prometheus.NewGaugeVec(prometheus.GaugeOpts{
	Name: "gogrinder_elapsed_ms",
	Help: "Current time elapsed of gogrinder teststep",
}, []string{"teststep", "user", "iteration", "timestamp"})

prometheus.MustRegister(elapsed)
All works fine, but I noticed that my custom exporter contains all the metrics from prometheus/go_collector.go:
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0.00041795300000000004
go_gc_duration_seconds{quantile="0.25"} 0.00041795300000000004
go_gc_duration_seconds{quantile="0.5"} 0.00041795300000000004
...
I suspect that this is the default behavior, but I did not find anything in the documentation on how to disable it. Any ideas on how to configure my custom exporter so that these default metrics disappear?
Well, the topic is rather old, but in case others have to deal with it:
The following code works fine with the current codebase (v0.9.0-pre1).
// [...] imports, metric initialization ...

func main() {
	// go get rid of any additional metrics
	// we have to expose our metrics with a custom registry
	r := prometheus.NewRegistry()
	r.MustRegister(myMetrics)
	handler := promhttp.HandlerFor(r, promhttp.HandlerOpts{})

	// [...] update metrics within a goroutine

	http.Handle("/metrics", handler)
	log.Fatal(http.ListenAndServe(":12345", nil))
}
I would simply do it this way:
// Register your collectors
elapsed := prometheus.NewGaugeVec(prometheus.GaugeOpts{
	Name: "gogrinder_elapsed_ms",
	Help: "Current time elapsed of gogrinder teststep",
}, []string{"teststep", "user", "iteration", "timestamp"})

prometheus.MustRegister(elapsed)

// Remove Go collector
prometheus.Unregister(prometheus.NewGoCollector())
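If process_* metrics also show up, the same trick should apply to the default process collector (an assumption based on recent client_golang versions, where the default registry registers both collectors and the constructor takes a ProcessCollectorOpts):
// Remove process collector as well
prometheus.Unregister(prometheus.NewProcessCollector(prometheus.ProcessCollectorOpts{}))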
This solution worked for me. The idea is to create a custom registry and register only our own metrics with it; since the default Go collector is never registered on that registry, the default metrics disappear (the EnableOpenMetrics option shown below only controls the exposition format).
var httpDuration = prometheus.NewHistogramVec(prometheus.HistogramOpts{
	Name: "golang_api_http_duration_seconds",
	Help: "Duration of HTTP requests.",
}, []string{"path", "host"})

promReg := prometheus.NewRegistry()
promReg.MustRegister(httpDuration)

handler := promhttp.HandlerFor(
	promReg,
	promhttp.HandlerOpts{
		EnableOpenMetrics: false,
	})

http.Handle("/metrics", handler)
log.Fatal(http.ListenAndServe(":12345", nil))
This is not currently possible in the Go client; once https://github.com/prometheus/client_golang/issues/46 is complete, you'll have a way to do this.
In general you want your custom exporter to export these; the only ones I'm aware of where it doesn't currently make sense are the snmp and blackbox exporters.
Incidentally, timestamp seems odd as a label; if you want that, you should likely be using logging rather than metrics. See https://blog.raintank.io/logs-and-metrics-and-graphs-oh-my/
The Prometheus way would be to have the timestamp as a value, not as a label.
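A sketch of what that looks like for the gauge from the question (the metric name and label values here are made up; Gauge has a SetToCurrentTime helper for exactly this):
// Record the timestamp as the gauge's value instead of as a label.
lastRun := prometheus.NewGaugeVec(prometheus.GaugeOpts{
	Name: "gogrinder_last_run_timestamp_seconds",
	Help: "Unix timestamp of the last gogrinder teststep run.",
}, []string{"teststep", "user"})
prometheus.MustRegister(lastRun)

lastRun.WithLabelValues("step1", "alice").SetToCurrentTime()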
It's not really helpful as an answer to say "you'd have to go and do it yourself", but it seems like the only option for now.
Since Prometheus is open source, if you really need to do that, I believe you'd have to fork go_collector.go (line #28 and the related sections), or better yet modify it to make all those metrics optional and open a PR so other people may also benefit from that in the future.
You can use --web.disable-exporter-metrics now.
https://github.com/prometheus/node_exporter/pull/1148
