Unable to test a Golang CLI tool's output - go

I have a cli tool written in Go which produces the following output:
Command: config get
Env: int
Component: foo-component
Unable to find any configuration within Cosmos (http://api.foo.com) for foo-component.
I would like to verify this output within a test.
The test I have written (which doesn't pass) is as follows:
package command

import (
    "testing"

    "github.com/my/package/foo"
)

type FakeCliContext struct{}

func (s FakeCliContext) String(name string) string {
    return "foobar"
}

func ExampleInvalidComponentReturnsError() {
    fakeBaseURL := "http://api.foo.com"
    fakeCliContext := &FakeCliContext{}
    fakeFetchFlag := func(foo.CliContext) (map[string]string, error) {
        return map[string]string{
            "env":       "int",
            "component": "foo-component",
        }, nil
    }

    GetConfig(*fakeCliContext, fakeFetchFlag, fakeBaseURL)

    // Output:
    // Command: config get
    // Env: int
    // Component: foo-component
    //
    // Unable to find any configuration within Cosmos (http://api.foo.com) for foo-component.
}
The majority of the code is creating fake objects that I'm injecting into my function call GetConfig.
Effectively there is no return value from GetConfig, only a side effect of text being printed to stdout.
So I'm using the Example<NameOfTest> format to try and verify the output.
But all I get back when I run go test -v is:
=== RUN ExampleInvalidComponentReturnsError
exit status 1
FAIL github.com/my/package/thing 0.418s
Does anyone know what I might be missing?
I've found that if I add an additional test after the 'Example' one above, for example called Test<NameOfTest> (but consisting of effectively the same code), then this will actually print the function's output to stdout when running the test:
func TestInvalidComponentReturnsError(t *testing.T) {
    fakeBaseURL := "http://api.foo.com"
    fakeCliContext := &FakeCliContext{}
    fakeFetchFlag := func(utils.CliContext) (map[string]string, error) {
        return map[string]string{
            "env":       "int",
            "component": "foo-component",
        }, nil
    }

    GetConfig(*fakeCliContext, fakeFetchFlag, fakeBaseURL)
}
The above example test will now show the following output when executing go test -v:
=== RUN TestInvalidComponentReturnsError
Command: config get
Env: int
Component: foo-component
Unable to find any configuration within Cosmos (http://api.foo.com) for foo-component.
exit status 1
FAIL github.com/bbc/apollo/command 0.938s

OK, so the solution to this problem was part architecture and part removal/refactoring of code.
I extracted the private functions from the cli command package so they became public functions in a separate package.
I refactored the code so that all dependencies were injected; this then allowed me to mock those objects and verify that the expected methods were called.
Now that the private functions are public in their own package, I'm able to test them directly, outside of the cli context.
Finally, I removed the use of os.Exit, as that was a nightmare to deal with and wasn't really necessary.
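A rough sketch of the shape that refactor can take (the names below are illustrative, not the actual project code): GetConfig keeps printing its report to stdout but returns an error instead of calling os.Exit, so main decides the exit code and the Example test can compare output normally. Calling os.Exit(1) inside GetConfig is the most likely reason the Example test above reported nothing but exit status 1: it terminates the test binary before the testing framework gets to compare stdout.
package command

import (
    "fmt"
)

// CliContext is a minimal flag-lookup interface, shown here only for illustration.
type CliContext interface {
    String(name string) string
}

// GetConfig prints its report and returns an error instead of exiting,
// which keeps it testable from both Test and Example functions.
func GetConfig(ctx CliContext, fetchFlags func(CliContext) (map[string]string, error), baseURL string) error {
    flags, err := fetchFlags(ctx)
    if err != nil {
        return err
    }
    fmt.Println("Command: config get")
    fmt.Println("Env: " + flags["env"])
    fmt.Println("Component: " + flags["component"])
    fmt.Println()
    fmt.Printf("Unable to find any configuration within Cosmos (%s) for %s.\n", baseURL, flags["component"])
    return fmt.Errorf("no configuration found for %s", flags["component"])
}
main (or the CLI action) can then map a non-nil error to os.Exit(1), keeping the exit behaviour out of the testable code path.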

Related

Programmatically create JUnit XML report from "go test" output

I am programmatically calling go test to run tests.
func executeAllTests(c *gin.Context) {
    // Enable verbose
    flag.Set("test.v", "true")

    id := "myUniqueId"

    // Asynchronously run the tests.
    go runTestsAndWriteReport(id)

    // Return
    c.JSON(200, id)
}

func runTestsAndWriteReport(fileName string) {
    testing.Main(
        nil,
        []testing.InternalTest{
            {"TestX", myTestPkg.TestX},
            {"TestY", myTestPkg.TestY},
        },
        nil, nil,
    )
    // TODO: write output as JUnit XML to file "fileName"
}
I would like to write the test output in JUnit XML form to a file. There are frameworks, for example gotestsum, that can do this from the command line; however, I want to do it programmatically, as shown above.
Any suggestion on how this can be done?
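For the XML side specifically, a minimal sketch of emitting a JUnit-style report with the standard encoding/xml package might look like the following. The struct layout mirrors the common testsuite/testcase schema; collecting the pass/fail data to feed into it is left open, since testing.Main does not expose results directly. All names here are illustrative assumptions, not an existing API.
package report

import (
    "encoding/xml"
    "os"
)

// JUnitTestCase and JUnitTestSuite follow the widely used JUnit XML layout.
type JUnitTestCase struct {
    XMLName   xml.Name `xml:"testcase"`
    ClassName string   `xml:"classname,attr"`
    Name      string   `xml:"name,attr"`
    Time      string   `xml:"time,attr"`
    Failure   string   `xml:"failure,omitempty"`
}

type JUnitTestSuite struct {
    XMLName  xml.Name        `xml:"testsuite"`
    Name     string          `xml:"name,attr"`
    Tests    int             `xml:"tests,attr"`
    Failures int             `xml:"failures,attr"`
    Cases    []JUnitTestCase `xml:"testcase"`
}

// WriteJUnitReport marshals the suite to fileName as indented XML.
func WriteJUnitReport(fileName string, suite JUnitTestSuite) error {
    f, err := os.Create(fileName)
    if err != nil {
        return err
    }
    defer f.Close()
    if _, err := f.WriteString(xml.Header); err != nil {
        return err
    }
    enc := xml.NewEncoder(f)
    enc.Indent("", "  ")
    return enc.Encode(suite)
}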

Pass resource from Command To Subcommands in urfave/cli/v2

Is it possible, and if so how, to let a Command initialize a resource and pass it down to its Subcommands? Imagine an application that takes its arguments like
$ mycmd db --connect <...> create <...>
$ mycmd db --connect <...> update <...>
This may not be a great example, but it illustrates the concept. Here db is some resource that all the subcommands depend on. I would like a single function to be responsible for the initialization of the db resource and then pass the initialized resource down to the subcommands. I can't figure out how to do this with urfave/cli/v2.
You could do it by creating two separate cli.Apps, one that parses the db part of the arguments just to create a context.Context with context.WithValue and then use that context to create the second cli.App which would parse the remainder of the arguments. I'm sure there's a better way to do it.
I'm grateful for any help!
You can achieve this with context values. You set the value in the Before callback of the parent Command. The code below is copied and modified from the subcommands example:
package main

import (
    "context"
    "fmt"
    "log"
    "os"

    "github.com/urfave/cli/v2"
)

func main() {
    app := &cli.App{
        Commands: []*cli.Command{
            {
                Name: "db",
                Before: func(c *cli.Context) error {
                    db := "example"
                    c.Context = context.WithValue(c.Context, "db", db)
                    return nil
                },
                Subcommands: []*cli.Command{
                    {
                        Name: "connect",
                        Action: func(c *cli.Context) error {
                            db := c.Context.Value("db").(string) // remember to assert to original type
                            fmt.Println("sub command:", db)
                            return nil
                        },
                    },
                },
            },
        },
    }

    err := app.Run(os.Args)
    if err != nil {
        log.Fatal(err)
    }
}
This main uses a string so that you can copy-paste and run it. You can replace the string with your DB object.
How to test:
$ go build -o example
$ ./example db connect
sub command: example
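As a small variation (my own suggestion, not part of the answer above): using an unexported key type instead of the string "db" avoids collisions with context values set by other packages, and avoids the go vet warning about basic-type context keys. A self-contained sketch, still using a string in place of a real DB object:
package main

import (
    "context"
    "fmt"
    "log"
    "os"

    "github.com/urfave/cli/v2"
)

// dbKey is an unexported key type; values stored under it cannot collide
// with keys used by other packages.
type dbKey struct{}

func main() {
    app := &cli.App{
        Commands: []*cli.Command{{
            Name: "db",
            Before: func(c *cli.Context) error {
                c.Context = context.WithValue(c.Context, dbKey{}, "example")
                return nil
            },
            Subcommands: []*cli.Command{{
                Name: "connect",
                Action: func(c *cli.Context) error {
                    db := c.Context.Value(dbKey{}).(string)
                    fmt.Println("sub command:", db)
                    return nil
                },
            }},
        }},
    }
    if err := app.Run(os.Args); err != nil {
        log.Fatal(err)
    }
}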

What do I need to do to execute sample golang code having a 'named' import like below?

This is a newbie question. The dependency seems to be on GitHub, and it's pretty obvious from the import, so why doesn't go run work?
The error is: no required module provides package github.com/hashicorp/go-getter
package main

import (
    "context"
    "fmt"
    "os"

    // Problem with line below, getting error: no required module provides package
    getter "github.com/hashicorp/go-getter"
)

func main() {
    client := &getter.Client{
        Ctx: context.Background(),
        // define the destination to where the directory will be stored. This will create the directory if it doesn't exist
        Dst: "/tmp/gogetter",
        Dir: true,
        // the repository with a subdirectory I would like to clone only
        Src:  "github.com/hashicorp/terraform/examples/cross-provider",
        Mode: getter.ClientModeDir,
        // define the type of detectors go getter should use, in this case only github is needed
        Detectors: []getter.Detector{
            &getter.GitHubDetector{},
        },
        // provide the getter needed to download the files
        Getters: map[string]getter.Getter{
            "git": &getter.GitGetter{},
        },
    }
    // download the files
    if err := client.Get(); err != nil {
        fmt.Fprintf(os.Stderr, "Error getting path %s: %v", client.Src, err)
        os.Exit(1)
    }
    // now you should check your temp directory for the files to see if they exist
}
Create a folder somewhere called getter, then create a file
getter/getter.go:
package main

import (
    "fmt"

    "github.com/hashicorp/go-getter/v2"
)

func main() {
    fmt.Println(getter.ErrUnauthorized)
}
Notice I didn't use a name like you specified, as it's redundant in this case. The package is already called getter [1], so you don't need to specify the same name. Then, run:
go mod init getter
go mod tidy
go build
https://pkg.go.dev/github.com/hashicorp/go-getter/v2
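If you would rather keep the original v1 import path from the question, the usual fix for "no required module provides package" is to initialize a module in the directory that holds main.go and let go mod tidy add the dependency (the module name below is just an example):
go mod init example.com/gettertest
go mod tidy
go run .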

How to create test code that needs to use the database

I want to publish my first go library on GitHub.
I created a scan_test.go which has many tests that connect to a PostgreSQL database. It doesn't need any data, only a valid connection, since it tests the results of static queries, for example select 1 union select 2.
So, to release the package in a way that the tests still work, how do I allow the database connection for the tests to be configured? One idea that comes up is to use env variables, but what's the official way? How do I properly create tests for my project?
example of my test file:
const (
    host     = "localhost"
    port     = 5432
    user     = "ufk"
    password = "your-password"
    dbname   = "mycw"
)

type StructInt struct {
    Moshe  int
    Moshe2 int
    Moshe3 []int
    Moshe4 []*int
    Moshe5 []string
}

func TestVarsInStructInJsonArrayWithOneColumn(t *testing.T) {
    if conn, err := GetDbConnection(); err != nil {
        t.Errorf("could not connect to database: %v", err)
    } else {
        sqlQuery := `select json_build_array(json_build_object('moshe',55,'moshe2',66,'moshe3','{10,11}'::int[],'moshe4','{50,51}'::int[],
            'moshe5','{kfir,moshe}'::text[]),
            json_build_object('moshe',56,'moshe2',67,'moshe3','{41,42}'::int[],'moshe4','{21,22}'::int[],
            'moshe5','{kfirrrr,moshrre}'::text[])) as moshe;`
        var foo []StructInt
        if isEmpty, err := Query(context.Background(), conn, &foo, sqlQuery); err != nil {
            t.Errorf("failed test: %v", err)
        } else if isEmpty {
            log.Fatal("failed test with empty results")
        }
        if foo[0].Moshe != 55 {
            t.Errorf("int slice test failed 21 <> %v", foo[0].Moshe)
        }
        if foo[1].Moshe2 != 67 {
            t.Errorf("int slice failed with 82 <> %v", foo[1].Moshe2)
        }
        if len(foo[1].Moshe3) != 2 {
            t.Errorf("int slice failed, array size should be 2 <> %v", len(foo[1].Moshe3))
        }
        if foo[1].Moshe3[1] != 42 {
            t.Errorf("int slice failed, moshe3[0] not 2 <=> %v", foo[1].Moshe3[1])
        }
        if len(foo[1].Moshe4) != 2 {
            t.Errorf("int slice failed, array size should be 2 <> %v", len(foo[1].Moshe4))
        }
        if *foo[1].Moshe4[1] != 22 {
            t.Errorf("int slice failed, moshe4[1] not 4 <=> %v", foo[1].Moshe4[1])
        }
    }
}

func GetDbConnection() (*pgxpool.Pool, error) {
    psqlInfo := fmt.Sprintf("host=%s port=%d user=%s "+
        "password=%s dbname=%s sslmode=disable",
        host, port, user, password, dbname)
    return pgxpool.Connect(context.Background(), psqlInfo)
}
thanks
A typical approach is to combine usage of TestMain with a documented set of environment variables and/or command-line options which can be passed to the testing binary built to run tests of a particular package.
Basically, TestMain reads the environment and/or command-line options, validates them, possibly populates some exported variables available to the test suite, and then runs the suite.
Here at my $dayjob we use the described approach plus a helper function named something like SkipIfNoDatabase(t *testing.T), which inspects the state set up by TestMain and skips the test from whose prologue it is called (after logging the reason) if the required configuration related to DB connectivity was not provided. This allows running a test suite without setting up a database: all the tests which require one will be skipped.
An example for running tests in Gitlab CI follows.
The .gitlab-ci.yml in a project contains, among other things, something like
stages:
  - test

.golang_stage:
  image: internal-corporate-registry:5000/go-test:1.14
  variables:
    MONGODB_URI: 'mongodb://test-mongodb:27017'
  services:
    - name: mongo:3.6-xenial
      alias: test-mongodb

test_golang:
  extends: .golang_stage
  stage: test
  tags:
    - common
  only:
    - pushes
    - schedules
  script:
    - make test
    - make build
This configuration makes sure that when our Go program, which makes use of a MongoDB instance, is tested, the mongo:3.6-xenial Docker image is pulled, run, and assigned the hostname test-mongodb, and the MONGODB_URI environment variable is set to point at that running MongoDB instance.
Now, the program features the package lib/testing/testdb which contains
package testdb

import (
    "io"
    "os"
    "testing"
)

// DBName is the MongoDB database name to be used in tests.
const DBName = "blah_blah_test"

var (
    // MongoDBURI identifies a MongoDB instance to use by testing suite.
    MongoDBURI string

    // Enabled is only set to true if the MongoDB URI was made available
    // to the test suite. It can be used by individual tests to skip
    // execution if an access to a MongoDB instance is required to perform
    // the test.
    Enabled bool
)

// Initialize initializes the package's global state.
//
// Initialize is intended to be called once per package being tested -
// typically from the package's TestMain function.
func Initialize() {
    MongoDBURI = os.Getenv("MONGODB_URI")
    if MongoDBURI == "" {
        Enabled = false
        io.WriteString(os.Stderr,
            "Empty or missing environment variable MONGODB_URI; related tests will be skipped\n")
        return
    }
    Enabled = true
}

// SkipIfDisabled skips the current test if it appears there is no
// MongoDB instance to use.
func SkipIfDisabled(t *testing.T) {
    if !Enabled {
        t.Skip("Empty or missing MONGODB_URI environment variable; skipped.")
    }
}
…and then each package which makes use of the database contains a file named main_test.go which reads
package whatever_test

import (
    "testing"

    "acme.com/app/lib/testing/testdb"
    "acme.com/app/lib/testing/testmain"
)

func TestMain(m *testing.M) {
    testdb.Initialize()
    testmain.Run(m)
}
and the tests themselves roll like this
package whatever_test

import (
    "testing"

    "acme.com/app/lib/testing/testdb"
)

func TestFoo(t *testing.T) {
    testdb.SkipIfDisabled(t)
    // Do testing
}
testmain is another internal package exporting the Run function, which runs the tests but, before doing that, takes care of additional initialization: it sets up logging for the app's code and figures out whether it was asked to run stress tests (which are only run at night, on a schedule).
package testmain

import (
    "flag"
    "os"
    "testing"

    "acme.com/app/lib/logging"
)

// Stress is true if the stress tests are enabled in this run
// of the test suite.
var Stress bool

// Run initializes the package state and then runs the test suite the
// way `go test` does by default.
//
// Run is expected to be called from TestMain functions of the test suites
// which make use of the testmain package.
func Run(m *testing.M) {
    initialize()
    os.Exit(m.Run())
}

// SkipIfNotStress marks the test currently executed by t as skipped
// unless the current test suite is running with the stress tests enabled.
func SkipIfNotStress(t *testing.T) {
    if !Stress {
        t.Skip("Skipped test: not in stress-test mode.")
    }
}

func initialize() {
    if flag.Parsed() {
        return
    }

    var logFileName string
    flag.BoolVar(&Stress, "stress", false, "Run stress tests")
    flag.StringVar(&logFileName, "log", "", "Name of the file to redirect log output into")
    flag.Parse()

    logging.SetupEx(logging.Params{
        Path:      logFileName,
        Overwrite: true,
        Mode:      logging.DefaultFileMode,
    })
}
The relevant bits of the project's Makefile which run stress tests look like this:
STRESS_LOG ?= stress.log

.PHONY: all test build stress install/linter

all: test build

build:
	go build -ldflags='$(LDFLAGS)'

test:
	go test ./...

stress:
	go test -v -run=Stress -count=1 ./... -stress -log="$(STRESS_LOG)"
…and the CI configuration to run stress tests reads
stress:
  extends: .golang_stage
  stage: test
  tags:
    - go
  only:
    - schedules
  variables:
    STRESS_LOG: "$CI_PROJECT_DIR/stress.log"
  artifacts:
    paths:
      - "$STRESS_LOG"
    when: on_failure
    expire_in: 1 week
  script:
    - make stress

Is it Possible to have Multiple lambda functions in single Go binary?

I am experimenting with Go on AWS Lambda, and I found that each function requires a binary to be uploaded for execution.
My question is: is it possible to have a single binary containing two different handler functions that can be loaded by two different Lambda functions?
For example:
func Handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    fmt.Println("Received body in Handler 1: ", request.Body)
    return events.APIGatewayProxyResponse{Body: request.Body, StatusCode: 200}, nil
}

func Handler1(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    fmt.Println("Received body in Handler 2: ", request.Body)
    return events.APIGatewayProxyResponse{Body: request.Body, StatusCode: 200}, nil
}

func EndPoint1() {
    lambda.Start(Handler)
}

func EndPoint2() {
    lambda.Start(Handler1)
}
and then calling the EndPoints from main in such a way that both are registered, so the same binary could be uploaded to both functions, MyFunction1 and MyFunction2.
I understand that having two different binaries is good because it reduces the load/size of each function.
But this is just an experimentation.
Thanks in advance :)
I believe it is not possible since Lambda console says the handler is the name of the executable file:
Handler: The executable file name value. For example, "myHandler" would call the main function in the package “main” of the myHandler executable program.
So one single executable file is unable to host two different handlers.
You can work around it by uploading the same executable to two different Lambda functions and selecting the handler at startup, along the following lines:
func main() {
    switch cfg.LambdaCommand {
    case "select_lambda_1":
        lambda1handler()
    case "select_lambda_2":
        lambda2handler()
    }
}
I've had success by adding an environment variable to the functions in the template.yaml, and making the main() check the variable and call the appropriate handler.
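A minimal sketch of that approach (the HANDLER variable name and the wiring below are illustrative assumptions, not taken from the answers above): the same binary is deployed to both functions, and each function's environment in template.yaml sets HANDLER to pick which handler to start.
package main

import (
    "fmt"
    "os"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
)

func Handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    fmt.Println("Received body in Handler 1: ", request.Body)
    return events.APIGatewayProxyResponse{Body: request.Body, StatusCode: 200}, nil
}

func Handler1(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    fmt.Println("Received body in Handler 2: ", request.Body)
    return events.APIGatewayProxyResponse{Body: request.Body, StatusCode: 200}, nil
}

// main picks the handler based on an environment variable set per function,
// so one binary can back both MyFunction1 and MyFunction2.
func main() {
    switch os.Getenv("HANDLER") {
    case "endpoint2":
        lambda.Start(Handler1)
    default:
        lambda.Start(Handler)
    }
}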
