Is there a way to output sentry messages to the console - go

I'm working on a suite of microservices written in Go. I have a demo in a couple of months, and by next year these services should be in production. For now, I'm just hashing out the basics and boilerplate, including calls to Sentry.
All of the services make several async requests that set several processes in motion. If one thing fails, I don't want to panic or return; I want to continue execution, but I want to be able to go back and see what happened.
While developing, I don't really want to send anything to Sentry, but I want to see what the output to Sentry would be so I can make sure that the messages, breadcrumbs, stack traces, etc. are all being captured as intended. Is anything like this possible? I tried running the local Sentry server, but it's quite bloated: it fired up about 20 Docker containers and consumed a LOT of memory. I'm just looking for something lightweight so I can see what's going on.

I came up with a solution -- the output is very verbose, but it's exactly what I was looking for (for now). I simply provided my own transport implementation and passed it in to ClientOptions:
import (
    "encoding/json"
    "fmt"
    "time"

    "github.com/getsentry/sentry-go"
    "go.uber.org/zap"
)

type consoleTransport struct{}

func (t *consoleTransport) Configure(options sentry.ClientOptions) {
    zap.L().Info("Sentry client initialized with an empty DSN. Using consoleTransport. No events will be delivered.")
}

func (t *consoleTransport) SendEvent(event *sentry.Event) {
    b, _ := json.Marshal(event)
    fmt.Println("[SENTRY CONSOLE] " + string(b))
}

func (t *consoleTransport) Flush(_ time.Duration) bool {
    return true
}
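Wiring it in is just a matter of passing the transport to ClientOptions. A minimal sketch, assuming the snippet above lives in a package that imports github.com/getsentry/sentry-go:

func initSentryForDev() error {
    return sentry.Init(sentry.ClientOptions{
        Dsn:       "",                  // empty DSN: nothing would leave the process anyway
        Transport: &consoleTransport{}, // dump serialized events to the console instead
    })
}

After that, calls such as sentry.CaptureMessage or sentry.CaptureException simply print their serialized events with the [SENTRY CONSOLE] prefix.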

Related

Proper logging implementation in Golang package

I have a small Golang package which does some work. This work assumes a high number of errors can be produced, and that is OK. Currently all errors are ignored. Yes, it may look strange, but visit the link and check the main purpose of the package.
I'd like to extend the functionality of the package and provide the ability to see errors that occur at runtime. But due to a lack of software design skills I have some questions with no answers.
At first, I thought of implementing logging inside the package using an existing logger (zerolog, zap or whatever else). But will that be OK for the package's users? They might want to use other logging packages and might want to modify the output format.
Maybe it's possible to provide a way for the user to inject their own logger?
I'd like to provide an easily configurable logging mechanism which can be switched on or off on the user's demand.
Some Go libraries handle logging like this: define a logger interface in your package

type Yourlogging interface {
    Errorf(format string, args ...interface{})
    Warningf(format string, args ...interface{})
    Infof(format string, args ...interface{})
    Debugf(format string, args ...interface{})
}

and define a package-level variable of that interface type, along with a setter for it

var mylogger Yourlogging

func SetLogger(l Yourlogging) {
    mylogger = l
}

In your functions, you can then call the logger

mylogger.Infof(...)
mylogger.Errorf(...)

Your package doesn't implement the interface itself; users pass in any value that implements it, for example:

SetLogger(logrus.New()) // route the package's logging to logrus (github.com/sirupsen/logrus)

(Note that something like os.Stdout would not work here, since a plain io.Writer does not implement these methods.)
In Go, you will see some libraries implement logging interfaces, as other answers have suggested. However, you could avoid your package needing to log at all if you structured your application differently.
For example, in the application you linked, the main program calls idleexacts.Run(), which starts this function.
// startLoop starts workload using passed settings and database connection.
func startLoop(ctx context.Context, log log.Logger, pool db.DB, tables []string, jobs uint16, minTime, maxTime time.Duration) error {
    rand.Seed(time.Now().UnixNano())
    // Increment maxTime by 1 because rand.Int63n() never returns its max value.
    maxTime++
    // While running, keep the required number of workers using a channel.
    // Run new workers only while there is a free slot.
    guard := make(chan struct{}, jobs)
    for {
        select {
        // Run workers only when it's possible to write into the channel (the channel is limited by the number of jobs).
        case guard <- struct{}{}:
            go func() {
                table := selectRandomTable(tables)
                naptime := time.Duration(rand.Int63n(maxTime.Nanoseconds()-minTime.Nanoseconds()) + minTime.Nanoseconds())
                err := startSingleIdleXact(ctx, pool, table, naptime)
                if err != nil {
                    log.Warnf("start idle xact failed: %s", err)
                }
                // When a worker finishes, read from the channel to allow starting another worker.
                <-guard
            }()
        case <-ctx.Done():
            return nil
        }
    }
}
The problem here is that all of the orchestration of your logic happens inside the package. Instead, this loop should run in your main application, and the package should provide users with simple actions such as selectRandomTable() or createTempTable().
If the orchestration lived in your main application and the package only provided simple actions, it would be much easier to return errors to the user as part of the function calls, as in the sketch below.
It would also make your package easier for others to reuse, because it exposes simple actions and lets users apply them in ways you didn't anticipate.
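To make that concrete, here is a rough, self-contained sketch of the "return errors, let the caller log" shape. The function names are illustrative only, not the actual API of the linked project:

package main

import (
    "context"
    "fmt"
    "log"
    "time"
)

// runIdleXact is the kind of small, error-returning action the package would
// export instead of running the whole loop itself.
func runIdleXact(ctx context.Context, table string, naptime time.Duration) error {
    select {
    case <-time.After(naptime):
        return fmt.Errorf("demo: pretend the idle transaction on %s failed", table)
    case <-ctx.Done():
        return ctx.Err()
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    defer cancel()

    // The orchestration, and the logging policy, live in the caller.
    for i := 0; i < 3; i++ {
        if err := runIdleXact(ctx, "pgbench_accounts", 200*time.Millisecond); err != nil {
            log.Printf("idle xact failed: %v", err)
        }
    }
}

The package stays silent; the caller decides whether errors go to zerolog, zap, stderr, or nowhere at all.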

How do I run another executable from a Windows service

Besides a few tutorials on Go, I have no actual experience with it. I'm trying to take a project written in Go and convert it into a Windows service.
I honestly haven't tried anything besides looking for things to read over. I found a few threads and chose the library I felt best covered all of our needs:
https://github.com/golang/sys
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build windows

package main

import (
    "fmt"
    "strings"
    "time"

    "golang.org/x/sys/windows/svc"
    "golang.org/x/sys/windows/svc/debug"
    "golang.org/x/sys/windows/svc/eventlog"
)

var elog debug.Log

type myservice struct{}

func (m *myservice) Execute(args []string, r <-chan svc.ChangeRequest, changes chan<- svc.Status) (ssec bool, errno uint32) {
    const cmdsAccepted = svc.AcceptStop | svc.AcceptShutdown | svc.AcceptPauseAndContinue
    changes <- svc.Status{State: svc.StartPending}
    fasttick := time.Tick(500 * time.Millisecond)
    slowtick := time.Tick(2 * time.Second)
    tick := fasttick
    changes <- svc.Status{State: svc.Running, Accepts: cmdsAccepted}
loop:
    for {
        select {
        case <-tick:
            beep()
            elog.Info(1, "beep")
        case c := <-r:
            switch c.Cmd {
            case svc.Interrogate:
                changes <- c.CurrentStatus
                // Testing deadlock from https://code.google.com/p/winsvc/issues/detail?id=4
                time.Sleep(100 * time.Millisecond)
                changes <- c.CurrentStatus
            case svc.Stop, svc.Shutdown:
                // golang.org/x/sys/windows/svc.TestExample is verifying this output.
                testOutput := strings.Join(args, "-")
                testOutput += fmt.Sprintf("-%d", c.Context)
                elog.Info(1, testOutput)
                break loop
            case svc.Pause:
                changes <- svc.Status{State: svc.Paused, Accepts: cmdsAccepted}
                tick = slowtick
            case svc.Continue:
                changes <- svc.Status{State: svc.Running, Accepts: cmdsAccepted}
                tick = fasttick
            default:
                elog.Error(1, fmt.Sprintf("unexpected control request #%d", c))
            }
        }
    }
    changes <- svc.Status{State: svc.StopPending}
    return
}

func runService(name string, isDebug bool) {
    var err error
    if isDebug {
        elog = debug.New(name)
    } else {
        elog, err = eventlog.Open(name)
        if err != nil {
            return
        }
    }
    defer elog.Close()

    elog.Info(1, fmt.Sprintf("starting %s service", name))
    run := svc.Run
    if isDebug {
        run = debug.Run
    }
    err = run(name, &myservice{})
    if err != nil {
        elog.Error(1, fmt.Sprintf("%s service failed: %v", name, err))
        return
    }
    elog.Info(1, fmt.Sprintf("%s service stopped", name))
}
So I spent some time going over this code and tested it out to see what it does. It performs as it should.
The question I have: we currently have a Go program that takes in arguments, and for our service we pass in "server", which spins up our stuff on a localhost web page.
I believe the code above may have something to do with that, but I'm lost as to how I would actually get it to spin off our exe with the correct arguments. Is this the right spot to call main?
I'm sorry if this is vague. I don't know exactly how to make this interact with our already existing exe.
I can get that modified if I know what needs to be changed. I appreciate any help.
OK, that's much clearer now. Well, ideally you should start with some tutorial on what constitutes a Windows service; I bet that alone might have solved the problem for you. But let's try anyway.
Some theory
A Windows service has, sort of, two facets: it performs some useful task, and it communicates with the SCM facility. When you manipulate a service using the sc command or through the Control Panel, you use that piece of software to talk to the SCM on your behalf, and the SCM talks to the service.
The exact protocol the SCM and a service use is low-level and complicated, and the point of the Go package you're using is to hide that complexity from you and offer a reasonably Go-centric interface to that stuff.
As you might gather from your own example, the Execute method of the type you've created is—for the most part—concerned with communicating with SCM: it runs an endless for loop which on each iteration sleeps on reading from the r channel, and that channel delivers SCM commands to your service.
So you basically have what could be called "an SCM command processing loop".
Now recall those two facets above. You already have one of them: your service interacts with SCM, so you need another one—the code which actually performs useful tasks.
In fact, it's already partially there: the example code you've grabbed creates a time ticker which provides a channel on which it delivers a value when another tick passes. The for loop in the Execute method reads from that channel as well, "doing work" each time another tick is signalled.
OK, this is fine for a toy example but lame for real work.
Approaching the solution
So let's pause for a moment and think about our requirements.
1. We need some code running and doing our actual task(s).
2. We need the existing command processing loop to continue working.
3. We need these two pieces of code to work concurrently.
In this toy example the 3rd point is there "for free" because a time ticker carries out the task of waiting for the next tick automatically and fully concurrently with the rest of the code.
Your real code most probably won't have that luxury, so what do you do?
In Go, when you need to do something concurrently with something else, an obvious answer is "use a goroutine". So the first step is to grab your existing code, turn it into a callable function, and then call it in a separate goroutine right before entering the for loop. This way, you'll have both pieces run concurrently.
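As a rough sketch only (doWork is a hypothetical stand-in for your real task; elog, myservice and the imports are the ones from the example above), the Execute method could end up looking like this:

func doWork() {
    for {
        // ... your actual work goes here: poll something, serve requests, etc. ...
        time.Sleep(time.Second)
    }
}

func (m *myservice) Execute(args []string, r <-chan svc.ChangeRequest, changes chan<- svc.Status) (ssec bool, errno uint32) {
    const cmdsAccepted = svc.AcceptStop | svc.AcceptShutdown
    changes <- svc.Status{State: svc.StartPending}

    go doWork() // the real work now runs concurrently with the SCM command loop

    changes <- svc.Status{State: svc.Running, Accepts: cmdsAccepted}
    for c := range r {
        switch c.Cmd {
        case svc.Interrogate:
            changes <- c.CurrentStatus
        case svc.Stop, svc.Shutdown:
            // NOTE: doWork is still running at this point; stopping it cleanly
            // is exactly the "hard part" discussed below.
            changes <- svc.Status{State: svc.StopPending}
            return false, 0
        default:
            elog.Error(1, fmt.Sprintf("unexpected control request #%d", c))
        }
    }
    return false, 0
}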
The hard parts
OK, that wasn't hard.
The hard parts are:
How to configure the code which performs the tasks.
How to make the SCM command processing loop and the code carrying out tasks communicate.
Configuration
This one really depends on the policies at your $dayjob or of your $current_project, but there are a few hints:
A Windows service may receive command-line arguments—either for a single run or permanently (passed to the service on each of its runs).
The downside is that it's not convenient to work with them from the UI/UX standpoint.
Traditionally, Windows services read the registry.
These days (after the advent of .NET and its pervasive xml-ity) the services tend to read configuration files.
The OS environment most of the time is a bad fit for the task.
You may combine several of these venues.
I'd probably start with a configuration file, but then again, you should pick the path of least resistance.
One thing to keep in mind is that reading and processing the configuration is better done before the service signals the SCM that it started OK: if the configuration is invalid or cannot be loaded, the service should log that extensively and signal that it failed, and not run the actual task-processing code.
Communication between the command processing loop and the tasks carrying code
This is IMO the hardest part.
It's possible to write a whole book here but let's keep it simple for now.
To make it as simple as possible I'd do the following:
Consider pausing, stopping and shutting down as mostly the same: all these signals must tell your task-processing code to quit, after which you wait for it to actually do so.
Consider the "continue" signal the same as starting the task-processing function: run it again, in a new goroutine.
Have one-directional communication, from the control loop to the task-processing code but not the other way; this will greatly simplify service state management.
This way, you can create a single channel which the task-processing code listens on (or checks periodically); when a value arrives on that channel, the code stops running, closes the channel and exits.
The control loop, when the SCM tells it to pause, stop or shut down, sends anything on that channel and then waits for the channel to be closed. When that happens, it knows the task-processing code is finished.
In Go, a common idiom for a channel that is only used for signaling is to give it the element type struct{} (the empty struct).
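Here is a small, self-contained sketch of that pattern, with a plain function standing in for the SCM command loop; how you actually monitor the channel inside your tasks is the open question discussed next:

package main

import (
    "fmt"
    "time"
)

// worker is the task-processing code: it periodically checks the quit
// channel, and when a value arrives it cleans up, closes the channel and exits.
func worker(quit chan struct{}) {
    for {
        select {
        case <-quit:
            fmt.Println("worker: asked to stop, cleaning up")
            close(quit) // closing signals back to the control loop that we're done
            return
        default:
            fmt.Println("worker: doing one unit of work")
            time.Sleep(200 * time.Millisecond)
        }
    }
}

func main() {
    quit := make(chan struct{})
    go worker(quit)

    time.Sleep(time.Second) // pretend the SCM just told us to stop

    quit <- struct{}{} // ask the worker to quit...
    <-quit             // ...and wait until it closes the channel in response
    fmt.Println("control loop: worker has finished")
}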
The question of how to monitor this control channel in the tasks running code is an open one and heavily depends on the nature of the tasks it performs.
Any further help here would be reciting what's written in the Go books on concurrency so you should have that covered first.
There's also an interesting question of how to have the communication between the control loop and the tasks processing loop resilient to the possible processing stalls in the latter, but then again, IMO it's too early to touch upon that.

How to recover from an asynchronous panic in external package

I'm learning Go and I'm trying to understand how to properly deal with panics from external packages.
Here is a runnable example; suppose a package defines the doFoo method. (It's located in the same package here for the sake of the example.)
package main

import (
    "log"
    "net/http"
    "sync"
    "time"

    "github.com/gorilla/handlers"
    "github.com/gorilla/mux"
)

// Method from external package
func doFoo() {
    var wg sync.WaitGroup
    wg.Add(1)
    // Do some cool async stuff
    go func() {
        time.Sleep(500 * time.Millisecond)
        wg.Done()
        panic("Oops !")
    }()
}

func router() *mux.Router {
    var router = mux.NewRouter().StrictSlash(true)
    router.HandleFunc("/doFoo", index).Methods("GET")
    return router
}

func main() {
    log.Fatal(http.ListenAndServe(":8080", handlers.RecoveryHandler()(router())))
}

func index(w http.ResponseWriter, r *http.Request) {
    defer func() {
        recover()
        w.WriteHeader(http.StatusInternalServerError)
    }()
    doFoo()
    w.WriteHeader(http.StatusOK)
}
Invoking the doFoo method will crash the server. I appreciate that this is correct behavior, since the application is now in an undetermined state; it's best to crash and have subsequent requests forwarded to a different process through some load balancer.
But my API server might still be serving other clients, it might be maintaining websockets, and I might also want to return a 500 error here.
Coming from Node.js, I am used to the concepts of uncaughtException for handling uncaptured synchronous exceptions and unhandledRejection for handling uncaptured asynchronous exceptions. These two process constructs give the developer the choice to either crash the program right away (if it makes sense), or log the error, return a proper HTTP code, and then maybe shut down gracefully if needed.
In my online research I find a lot of resources saying that panics are not like exceptions, that they are unusual, and that you don't need to worry about them. But it seems like it's actually very easy to cause a panic when writing code. It's completely up to the developer to ensure their library does not panic; the human factor is 100% involved here.
This leads me to wonder: do I need to audit the entire code base of every single package I'm going to use, including all of their dependencies, just because I have no means of safeguarding against a missed recover in some external package taking down my whole server and ruining my users' experience?
Or is there some strategy I am not aware of that I can fail gracefully when an asynchronous panic occurs in library code ?
I noticed there is graceful shutdown since 1.8, but I can't use this because my program has already crashed.
https://golang.org/pkg/net/http/#Server.Shutdown
There is the gorilla recovery handler, but again, this only protects against synchronous panics.
http://www.gorillatoolkit.org/pkg/handlers#RecoveryHandler
Update:
I am aware that panics are not exceptions. Restating that does not answer the question; panics versus exceptions is not what this question is about. This question is about understanding what tools the language may provide to enforce boundaries without placing the burden of reading every single line in the entire package tree on the developer. If that's not possible in the language, then stating so is a valid answer. I just don't know whether it is or not.
Panics are not exceptions. Do not treat them like exceptions and you will be fine.
First things first: package APIs should never panic; they should always return an error except in certain very rare cases, and those cases must be clearly documented as to when and why they can panic (regexp.MustCompile is a good example of something that may panic). Any package that panics if it hits an error (and doesn't have a very good reason to do so) is bad; don't use it.
If you do bounds checking, make sure not to access nil pointers, etc., you should never have to worry about panics.
As for recovering panics in a goroutine, unless the goroutine has its own recovery handler you can't.
If the goroutine is from a third-party library, don't use that library! If they are lax enough not to check edge cases and/or lazy enough to just panic on error, why are you using their code? Who knows what other mines it holds?
If the goroutine is your own code, try to eliminate things that can panic, then add a recovery handler to catch the ones you can't prevent if needed.
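For your own goroutines, that per-goroutine recovery handler can be as small as this (a self-contained sketch; safeGo is just an illustrative helper name):

package main

import (
    "log"
    "time"
)

// safeGo runs fn in a new goroutine and turns any panic inside it into a
// logged error instead of a process crash. This only works for goroutines
// you start yourself; a goroutine started inside a library cannot be
// protected from the outside.
func safeGo(fn func()) {
    go func() {
        defer func() {
            if r := recover(); r != nil {
                log.Printf("recovered from panic in goroutine: %v", r)
            }
        }()
        fn()
    }()
}

func main() {
    safeGo(func() {
        time.Sleep(100 * time.Millisecond)
        panic("Oops !")
    })
    time.Sleep(time.Second) // give the goroutine time to run (and panic)
    log.Println("still alive")
}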

Go GC stopping my goroutine?

I have been trying to get into Go from the more traditional languages such as Java and C and so far I've been enjoying the well-thought out design choices that Go offers. When I started my first "real" project though, I ran into a problem that almost nobody seems to have.
My project is a simple networking implementation that sends and receives packets. The general structure is something like this (of course simplified):
A client manages the net.Conn with the server. This Client creates a PacketReader and a PacketWriter. These both run infinite loops in separate goroutines. The PacketReader takes an interface with a single OnPacketReceived function that is implemented by the Client.
The PacketReader code looks something like this:
go func() {
    for {
        bytes, err := reader.ReadBytes(10) // Blocks the current routine until bytes are available.
        if err != nil {
            panic(err) // Handle error
        }
        reader.handler.OnPacketReceived(reader.parseBytes(bytes))
    }
}()
The PacketWriter code looks something like this:
go func() {
    for {
        if len(reader.packetQueue) > 0 {
            // Write packet
        }
    }
}()
In order to make Client blocking, the client makes a channel that gets filled by OnPacketReceived, something like this:
type Client struct {
    callbacks map[int]chan interface{}
    // More fields
}

func (c *Client) OnPacketReceived(packet *Packet) {
    c.callbacks[packet.Id] <- packet.Data
}

func (c *Client) SendDataBlocking(id int, data interface{}) interface{} {
    c.PacketWriter.QueuePacket(data)
    return <-c.callbacks[id]
}
Now here is my problem: the reader.parseBytes function does some intensive decoding that creates quite a lot of objects (to the point that the GC decides to run). The GC, however, pauses the reader goroutine that is currently decoding the bytes, and then hangs. A problem that seems similar to mine is described here. I have confirmed that it is actually the GC causing it, because running with GOGC=off completes successfully.
At this point, my 3 routines look like this:
- Client (main routine): Waiting for channel
- Writer: Still running, waiting for new data in queue
- Reader: Set as runnable, but not actually running
Somehow, the GC is either not able to stop all routines in order to run, or it does not resume said goroutines after it stopped them.
So my question is this: is there any way to fix this? I am new to Go, so I don't really know if my design choices are even remotely conventional, and I'm open to changing the structure of my program. Do I need to change the way I handle packet-reading callbacks, or do I need to try to make the packet decoder less intensive? Thanks!
Edit: I am running Go 1.5.1, I'll try to get a working example later today.
As per mrd0ll4r's comment, I changed the writer to use channels instead of a slice (I don't even know why I did it that way in the first place). That seemed to give the GC enough "mobility" to allow the threads to stop. Adding in runtime.Gosched() while still using slices also worked, but the channels seemed more "go-esque".
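For reference, the channel-based writer loop can be as simple as the following sketch (the Packet type and the actual wire write are placeholders):

package main

import (
    "fmt"
    "time"
)

type Packet struct {
    Id   int
    Data interface{}
}

// startWriter replaces the busy-polling slice loop: the goroutine blocks on a
// channel receive, which both avoids burning a CPU core and gives the
// scheduler (and GC) a natural point to preempt it.
func startWriter(packets <-chan *Packet) {
    go func() {
        for p := range packets { // blocks until a packet is queued or the channel is closed
            fmt.Printf("writing packet %d: %v\n", p.Id, p.Data) // stand-in for the real conn write
        }
    }()
}

func main() {
    packets := make(chan *Packet, 16) // buffered queue instead of a slice
    startWriter(packets)

    packets <- &Packet{Id: 1, Data: "hello"}
    packets <- &Packet{Id: 2, Data: "world"}
    close(packets)

    time.Sleep(100 * time.Millisecond) // let the writer drain before exiting
}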

Overriding http.Server.Serve

I need to embed the default http.Server in my own server struct and customize the Serve method.
The server needs to short circuit the go c.serve() call and only run that line if it has the computing resources available to respond within 50ms. Otherwise the server is just going to send a 204 and move on.
This is almost straightforward.
type PragmaticServer struct {
    http.Server
    Addr    string
    Handler http.Handler
}

func (srv *PragmaticServer) Serve(l net.Listener) error {
    defer l.Close()
    var tempDelay time.Duration // how long to sleep on accept failure
    for {
        // SNIP for clarity
        c, err := srv.newConn(rw)
        if err != nil {
            continue
        }
        c.setState(c.rwc, StateNew) // before Serve can return
        go c.serve()
    }
}
So, again: this almost works. Except that srv.newConn is an unexported method, as are c.serve and c.setState, which means I end up having to copy and paste pretty much the entirety of net/http for this to compile. That's basically a fork. Is there any better way to do this?
Unfortunately, you're not going to be able to do that without reimplementing most of the Server code. Short of that, we usually intercept the request either just before, at the listener's Accept, or just after, at Handler.ServeHTTP.
The first method is to create a custom net.Listener that filters out connections before they are even handed off to the http.Server; a sketch follows below. While this can respond faster and consume fewer resources, it makes it less convenient to write HTTP responses, and it precludes you from limiting requests on already open connections.
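A rough sketch of that listener-level approach, under the assumption that "resources available" can be approximated by a cap on open connections (the 100 below is an arbitrary budget; golang.org/x/net/netutil.LimitListener implements a similar idea):

package main

import (
    "log"
    "net"
    "net/http"
    "sync"
)

// limitListener wraps a net.Listener and refuses connections once the
// capacity budget is used up, so http.Server never even sees the excess.
type limitListener struct {
    net.Listener
    sem chan struct{} // one token per open connection
}

func (l *limitListener) Accept() (net.Conn, error) {
    for {
        c, err := l.Listener.Accept()
        if err != nil {
            return nil, err
        }
        select {
        case l.sem <- struct{}{}: // capacity available
            return &limitConn{Conn: c, release: func() { <-l.sem }}, nil
        default: // over capacity: drop the connection before any HTTP work happens
            c.Close()
        }
    }
}

type limitConn struct {
    net.Conn
    once    sync.Once
    release func()
}

func (c *limitConn) Close() error {
    c.once.Do(c.release) // give the capacity token back exactly once
    return c.Conn.Close()
}

func main() {
    ln, err := net.Listen("tcp", ":8080")
    if err != nil {
        log.Fatal(err)
    }
    ll := &limitListener{Listener: ln, sem: make(chan struct{}, 100)} // 100 is an arbitrary budget
    log.Fatal(http.Serve(ll, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("ok\n"))
    })))
}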
The second way to handle this is to wrap the handlers and intercept the request before any real work has been done. You most likely want to create an http.Handler that filters the requests and passes them through to your main handler, as sketched below. This can also be more flexible, since you can filter based on the route or other request information if you so choose.
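A sketch of that handler-wrapping approach; here "capacity" is approximated by a cap on in-flight requests, standing in for whatever "can we respond within 50ms" check the real server would use:

package main

import (
    "log"
    "net/http"
)

// withLoadShedding wraps next and answers 204 immediately when no capacity is
// left, instead of doing the real work.
func withLoadShedding(next http.Handler, maxInFlight int) http.Handler {
    sem := make(chan struct{}, maxInFlight)
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        select {
        case sem <- struct{}{}:
            defer func() { <-sem }() // free the slot when the request finishes
            next.ServeHTTP(w, r)
        default:
            w.WriteHeader(http.StatusNoContent) // 204: shed the request and move on
        }
    })
}

func main() {
    mainHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("did the real work\n"))
    })
    log.Fatal(http.ListenAndServe(":8080", withLoadShedding(mainHandler, 100)))
}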
