Programmatically create JUnit XML report from "go test" output

I am programmatically calling go test to run tests.
func executeAllTests(c *gin.Context) {
    // Enable verbose output.
    flag.Set("test.v", "true")
    id := "myUniqueId"
    // Asynchronously run the tests.
    go runTestsAndWriteReport(id)
    // Return the ID to the caller.
    c.JSON(200, id)
}
func runTestsAndWriteReport(fileName string) {
    testing.Main(
        nil,
        []testing.InternalTest{
            {"TestX", myTestPkg.TestX},
            {"TestY", myTestPkg.TestY},
        },
        nil, nil,
    )
    // TODO: write output as JUnit XML to file "fileName"
}
I would like to write the test output in JUnit XML form to a file. There are tools such as gotestsum that can do this from the command line; however, I want to do it programmatically, as shown above.
Any suggestion on how this can be done?

Related

How to do OpenAPI-format YAML validation without using clusters?

I have built a schema in OpenAPI format:
type Test_manifest struct {
    metav1.TypeMeta
    metav1.ObjectMeta
    Spec spec
}

type spec struct {
    Policies  []string
    Resources resources
    Results   []results
    Variables variables
}
This is not the complete schema, just a part of it.
And here below is the actual yaml file:
apiVersion: cli.kyverno.io/v1beta1
kind: kyvernotest
metadata:
  name: test-check-nvidia-gpus
  labels:
    foolabel: foovalue
  annotations:
    fookey: foovalue
I'm trying to validate this incoming YAML file from the user. I can convert the YAML to JSON and then validate the values of the fields, but I don't see how to validate the field itself: if the user writes name1 rather than name, how do I report an error on it? Basically, how do I validate the key?
Here's what I've implemented for value validation:
test := "cmd/cli/kubectl-kyverno/test/test.yaml"
yamlFile, err := ioutil.ReadFile(test)
if err != nil {
    fmt.Printf("Error: failed to read file %v", err)
}
policyBytes, err1 := yaml.ToJSON(yamlFile)
if err1 != nil {
    fmt.Printf("failed to convert to JSON")
}
tests := &kyvernov1.Test_manifest{}
if err := json.Unmarshal(policyBytes, tests); err != nil {
    fmt.Printf("failed to decode yaml")
}
if tests.TypeMeta.APIVersion == "" {
    fmt.Printf("skipping file as tests.TypeMeta.APIVersion not found")
}
if tests.TypeMeta.Kind == "" {
    fmt.Printf("skipping file as tests.TypeMeta.Kind not found")
} else if tests.TypeMeta.Kind != "KyvernoTest" {
    fmt.Printf("skipping file as tests.TypeMeta.Kind is not `KyvernoTest`")
}
Also, we want this validation to happen outside the cluster.
Two things come to my mind:
I notice that you are trying to build a Kubernetes API extension manually, which is a lot of rework. I suggest you use a framework that handles this for you; that is the recommended best practice and is used very frequently, because it is too complicated to do manually. Here are some resources: kubebuilder and operator-sdk. These solutions are OpenAPI-based as well: they let you define your schema in a simple template and generate all of the validation, API, and controller code for you.
If you want more validation and sanity testing, this is typically done with the help of an admission controller in your cluster. It intercepts the incoming request and, before it is processed by the API server, performs actions on it (verification, compliance, policy enforcement, authentication, etc.). You can read more about admission controllers here.
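If the immediate goal is only to catch misspelled keys outside a cluster, one lighter-weight option (a sketch, not Kyverno's actual validation path) is to keep the YAML-to-JSON conversion you already have and decode the JSON with encoding/json's DisallowUnknownFields, which makes the decoder fail on any key that has no matching struct field, at any nesting level; the Manifest/Metadata structs below are simplified stand-ins for your Test_manifest:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// Metadata and Manifest are simplified stand-ins for the real schema.
type Metadata struct {
	Name string `json:"name"`
}

type Manifest struct {
	APIVersion string   `json:"apiVersion"`
	Kind       string   `json:"kind"`
	Metadata   Metadata `json:"metadata"`
}

// strictDecode decodes JSON into v, rejecting any key that does not
// correspond to a field of the target struct.
func strictDecode(data []byte, v interface{}) error {
	dec := json.NewDecoder(bytes.NewReader(data))
	dec.DisallowUnknownFields()
	return dec.Decode(v)
}

func main() {
	good := []byte(`{"apiVersion":"cli.kyverno.io/v1beta1","kind":"kyvernotest","metadata":{"name":"t"}}`)
	bad := []byte(`{"apiVersion":"cli.kyverno.io/v1beta1","kind":"kyvernotest","metadata":{"name1":"t"}}`)

	var m Manifest
	fmt.Println(strictDecode(good, &m)) // <nil>
	fmt.Println(strictDecode(bad, &m))  // json: unknown field "name1"
}
```

Some YAML libraries offer an equivalent strict or known-fields mode directly, but the stdlib JSON route fits the conversion step already present in the code above.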

Pass resource from Command To Subcommands in urfave/cli/v2

Is it possible, and if so how, to let a Command initialize a resource and pass it down to its Subcommands? Imagine an application that takes its arguments like
$ mycmd db --connect <...> create <...>
$ mycmd db --connect <...> update <...>
This may not be a great example, but it illustrates the concept. Here db is some resource that all the subcommands depend on. I would like a single function to be responsible for initializing the db resource and then pass the initialized resource down to the subcommands. I can't figure out how to do this with urfave/cli/v2.
You could do it by creating two separate cli.Apps: one that parses the db part of the arguments just to create a context.Context with context.WithValue, and then uses that context to create the second cli.App, which parses the remainder of the arguments. I'm sure there's a better way to do it.
I'm grateful for any help!
You can achieve this with context values: set the value in the Before callback of the parent Command. The code below is copied and modified from the subcommands example:
package main

import (
    "context"
    "fmt"
    "log"
    "os"

    "github.com/urfave/cli/v2"
)

func main() {
    app := &cli.App{
        Commands: []*cli.Command{
            {
                Name: "db",
                Before: func(c *cli.Context) error {
                    db := "example"
                    c.Context = context.WithValue(c.Context, "db", db)
                    return nil
                },
                Subcommands: []*cli.Command{
                    {
                        Name: "connect",
                        Action: func(c *cli.Context) error {
                            db := c.Context.Value("db").(string) // remember to assert to the original type
                            fmt.Println("sub command:", db)
                            return nil
                        },
                    },
                },
            },
        },
    }

    err := app.Run(os.Args)
    if err != nil {
        log.Fatal(err)
    }
}
This main uses a string so that you can copy paste and run it. You can replace string with your DB object.
How to test:
$ go build -o example
$ ./example db connect
sub command: example

How to create test code that needs to use the database

I want to publish my first go library on GitHub.
I created a scan_test.go which has many tests that connect to a PostgreSQL database. It doesn't need any data, only a valid connection, since it tests the results of static queries, for example select 1 union select 2.
So, to release the package in a way that the tests still work, how do I allow configuring the database for the tests? One idea that comes to mind is to use environment variables, but what's the official way? How do I properly create tests for my project?
example of my test file:
const (
    host     = "localhost"
    port     = 5432
    user     = "ufk"
    password = "your-password"
    dbname   = "mycw"
)

type StructInt struct {
    Moshe  int
    Moshe2 int
    Moshe3 []int
    Moshe4 []*int
    Moshe5 []string
}
func TestVarsInStructInJsonArrayWithOneColumn(t *testing.T) {
    if conn, err := GetDbConnection(); err != nil {
        t.Errorf("could not connect to database: %v", err)
    } else {
        sqlQuery := `select json_build_array(json_build_object('moshe',55,'moshe2',66,'moshe3','{10,11}'::int[],'moshe4','{50,51}'::int[],
            'moshe5','{kfir,moshe}'::text[]),
            json_build_object('moshe',56,'moshe2',67,'moshe3','{41,42}'::int[],'moshe4','{21,22}'::int[],
            'moshe5','{kfirrrr,moshrre}'::text[])) as moshe;`
        var foo []StructInt
        if isEmpty, err := Query(context.Background(), conn, &foo, sqlQuery); err != nil {
            t.Errorf("failed test: %v", err)
        } else if isEmpty {
            t.Fatal("failed test with empty results")
        }
        if foo[0].Moshe != 55 {
            t.Errorf("int slice test failed 55 <> %v", foo[0].Moshe)
        }
        if foo[1].Moshe2 != 67 {
            t.Errorf("int slice failed with 67 <> %v", foo[1].Moshe2)
        }
        if len(foo[1].Moshe3) != 2 {
            t.Errorf("int slice failed, array size should be 2 <> %v", len(foo[1].Moshe3))
        }
        if foo[1].Moshe3[1] != 42 {
            t.Errorf("int slice failed, moshe3[1] not 42 <=> %v", foo[1].Moshe3[1])
        }
        if len(foo[1].Moshe4) != 2 {
            t.Errorf("int slice failed, array size should be 2 <> %v", len(foo[1].Moshe4))
        }
        if *foo[1].Moshe4[1] != 22 {
            t.Errorf("int slice failed, moshe4[1] not 22 <=> %v", *foo[1].Moshe4[1])
        }
    }
}
func GetDbConnection() (*pgxpool.Pool, error) {
    psqlInfo := fmt.Sprintf("host=%s port=%d user=%s "+
        "password=%s dbname=%s sslmode=disable",
        host, port, user, password, dbname)
    return pgxpool.Connect(context.Background(), psqlInfo)
}
thanks
A typical approach is to combine usage of TestMain with a documented set of environment variables and/or command-line options which can be passed to the testing binary built to run tests of a particular package.
Basically, TestMain reads the environment and/or command-line options, validates them, possibly populates some exported variables available to the test suite, and then runs the suite.
Here at my $dayjob we use the described approach together with a helper function named something like SkipIfNoDatabase(t *testing.T), which inspects the state set up by TestMain and skips the test in whose prologue it is called (after logging the reason) if the required DB-connectivity configuration was not provided. This allows running a test suite without setting up a database: all the tests which require one will be skipped.
An example for running tests in Gitlab CI follows.
The .gitlab-ci.yml in a project contains, among other things, something like
stages:
  - test

.golang_stage:
  image: internal-corporate-registry:5000/go-test:1.14
  variables:
    MONGODB_URI: 'mongodb://test-mongodb:27017'
  services:
    - name: mongo:3.6-xenial
      alias: test-mongodb

test_golang:
  extends: .golang_stage
  stage: test
  tags:
    - common
  only:
    - pushes
    - schedules
  script:
    - make test
    - make build
This configuration makes sure that when our Go program, which makes use of a MongoDB instance, is tested, the mongo:3.6-xenial Docker image is pulled, run, and assigned the hostname test-mongodb, and the MONGODB_URI environment variable is set to refer to that running MongoDB instance.
Now the program features the package lib/testing/testdb, which contains
package testdb

import (
    "io"
    "os"
    "testing"
)

// DBName is the MongoDB database name to be used in tests.
const DBName = "blah_blah_test"

var (
    // MongoDBURI identifies a MongoDB instance to use by the testing suite.
    MongoDBURI string

    // Enabled is only set to true if the MongoDB URI was made available
    // to the test suite. It can be used by individual tests to skip
    // execution if access to a MongoDB instance is required to perform
    // the test.
    Enabled bool
)

// Initialize initializes the package's global state.
//
// Initialize is intended to be called once per package being tested -
// typically from the package's TestMain function.
func Initialize() {
    MongoDBURI = os.Getenv("MONGODB_URI")
    if MongoDBURI == "" {
        Enabled = false
        io.WriteString(os.Stderr,
            "Empty or missing environment variable MONGODB_URI; related tests will be skipped\n")
        return
    }
    Enabled = true
}

// SkipIfDisabled skips the current test if it appears there is no
// MongoDB instance to use.
func SkipIfDisabled(t *testing.T) {
    if !Enabled {
        t.Skip("Empty or missing MONGODB_URI environment variable; skipped.")
    }
}
…and then each package which makes use of the database contains a file named main_test.go which reads
package whatever_test

import (
    "testing"

    "acme.com/app/lib/testing/testdb"
    "acme.com/app/lib/testing/testmain"
)

func TestMain(m *testing.M) {
    testdb.Initialize()
    testmain.Run(m)
}
and the tests themselves roll like this
package whatever_test

import (
    "testing"

    "acme.com/app/lib/testing/testdb"
)

func TestFoo(t *testing.T) {
    testdb.SkipIfDisabled(t)
    // Do testing
}
testmain is another internal package exporting the Run function, which runs the tests but, before doing that, takes care of additional initialization: it sets up logging for the app's code and figures out whether it was requested to run stress tests (which are only run at night, on a schedule).
package testmain

import (
    "flag"
    "os"
    "testing"

    "acme.com/app/lib/logging"
)

// Stress is true if the stress tests are enabled in this run
// of the test suite.
var Stress bool

// Run initializes the package state and then runs the test suite the
// way `go test` does by default.
//
// Run is expected to be called from TestMain functions of the test suites
// which make use of the testmain package.
func Run(m *testing.M) {
    initialize()
    os.Exit(m.Run())
}

// SkipIfNotStress marks the test currently executed by t as skipped
// unless the current test suite is running with the stress tests enabled.
func SkipIfNotStress(t *testing.T) {
    if !Stress {
        t.Skip("Skipped test: not in stress-test mode.")
    }
}

func initialize() {
    if flag.Parsed() {
        return
    }
    var logFileName string
    flag.BoolVar(&Stress, "stress", false, "Run stress tests")
    flag.StringVar(&logFileName, "log", "", "Name of the file to redirect log output into")
    flag.Parse()
    logging.SetupEx(logging.Params{
        Path:      logFileName,
        Overwrite: true,
        Mode:      logging.DefaultFileMode,
    })
}
The relevant bits of the project's Makefile which run stress tests look like this:
STRESS_LOG ?= stress.log

.PHONY: all test build stress install/linter

all: test build

build:
	go build -ldflags='$(LDFLAGS)'

test:
	go test ./...

stress:
	go test -v -run=Stress -count=1 ./... -stress -log="$(STRESS_LOG)"
…and the CI configuration to run stress tests reads
stress:
  extends: .golang_stage
  stage: test
  tags:
    - go
  only:
    - schedules
  variables:
    STRESS_LOG: "$CI_PROJECT_DIR/stress.log"
  artifacts:
    paths:
      - "$STRESS_LOG"
    when: on_failure
    expire_in: 1 week
  script:
    - make stress

Sentry Go Integration, how to specify error level?

According to the official docs https://docs.sentry.io/clients/go/ you can log errors in Sentry from a golang project with:
// For panics
raven.CapturePanic(func() {
    // do all of the scary things here
}, nil)

// For errors
if err != nil {
    raven.CaptureErrorAndWait(err, nil)
    log.Panic(err)
}
This works like a charm; the problem is that in Sentry both calls are logged with level 'Error'. Does anyone know how the logging level can be specified for each call? In Python it is very explicit, but I don't see it for Go.
Using the sentry-go SDK, the Level is set on the Scope.
Documentation:
https://pkg.go.dev/github.com/getsentry/sentry-go?tab=doc#Scope.SetLevel
https://pkg.go.dev/github.com/getsentry/sentry-go?tab=doc#Level
Example:
sentry.WithScope(func(scope *sentry.Scope) {
    scope.SetLevel(sentry.LevelFatal)
    sentry.CaptureException(errors.New("example error"))
})
I followed the advice in the comments, and came up with this:
// sentryErrorCapture sends error data to Sentry asynchronously. Use for non-fatal errors.
var sentryErrorCapture = func(err error, severity raven.Severity, tags map[string]string, interfaces ...raven.Interface) string {
    packet := newSentryPackage(err, severity, tags, interfaces...)
    eventID, _ := raven.Capture(packet, tags)
    return eventID
}

func newSentryPackage(err error, severity raven.Severity, tags map[string]string, interfaces ...raven.Interface) (packet *raven.Packet) {
    interfaces = append(interfaces,
        raven.NewException(err, raven.GetOrNewStacktrace(err, 1, 3, raven.IncludePaths())))
    packet = &raven.Packet{
        Message:    err.Error(),
        Level:      severity,
        Interfaces: interfaces,
        Extra:      getSentryExtraInfo(),
    }
    return
}
When I want to log an error specifying the level I call: sentryErrorCapture(err, raven.ERROR, nil).

Unable to test a Golang CLI tool's output

I have a cli tool written in Go which produces the following output:
Command: config get
Env: int
Component: foo-component
Unable to find any configuration within Cosmos (http://api.foo.com) for foo-component.
I would like to verify this output within a test.
The test I have written (and doesn't pass) is as follows:
package command

import (
    "testing"

    "github.com/my/package/foo"
)

type FakeCliContext struct{}

func (s FakeCliContext) String(name string) string {
    return "foobar"
}

func ExampleInvalidComponentReturnsError() {
    fakeBaseURL := "http://api.foo.com"
    fakeCliContext := &FakeCliContext{}
    fakeFetchFlag := func(foo.CliContext) (map[string]string, error) {
        return map[string]string{
            "env":       "int",
            "component": "foo-component",
        }, nil
    }

    GetConfig(*fakeCliContext, fakeFetchFlag, fakeBaseURL)

    // Output:
    // Command: config get
    // Env: int
    // Component: foo-component
    //
    // Unable to find any configuration within Cosmos (http://api.foo.com) for foo-component.
}
The majority of the code is creating fake objects that I'm injecting into my function call GetConfig.
Effectively there is no return value from GetConfig only a side effect of text being printed to stdout.
So I'm using the Example<NameOfTest> format to try and verify the output.
But all I get back when I run go test -v is:
=== RUN ExampleInvalidComponentReturnsError
exit status 1
FAIL github.com/my/package/thing 0.418s
Does anyone know what I might be missing?
I've found that if I add an additional test after the 'Example' one above, for example one called Test<NameOfTest> (but consisting of effectively the same code), then it will actually display the function's output on my stdout when running the test:
func TestInvalidComponentReturnsError(t *testing.T) {
    fakeBaseURL := "http://api.foo.com"
    fakeCliContext := &FakeCliContext{}
    fakeFetchFlag := func(utils.CliContext) (map[string]string, error) {
        return map[string]string{
            "env":       "int",
            "component": "foo-component",
        }, nil
    }

    GetConfig(*fakeCliContext, fakeFetchFlag, fakeBaseURL)
}
The above example test will now show the following output when executing go test -v:
=== RUN TestInvalidComponentReturnsError
Command: config get
Env: int
Component: foo-component
Unable to find any configuration within Cosmos (http://api.foo.com) for foo-component.
exit status 1
FAIL github.com/bbc/apollo/command 0.938s
OK, so the solution to this problem was part architecture and part removal/refactoring of code:
I extracted the private functions from the CLI command package so they became public functions in a separate package.
I refactored the code so that all dependencies were injected, which then allowed me to mock those objects and verify that the expected methods were called.
Now that the private functions are in a separate package and made public, I'm able to test them specifically, outside of the CLI context.
Finally, I removed the use of os.Exit, as that was a nightmare to deal with and wasn't really necessary.
