Is it possible, and if so how, to let a Command initialize a resource and pass it down to its subcommands? Imagine an application that takes its arguments like
$ mycmd db --connect <...> create <...>
$ mycmd db --connect <...> update <...>
This may not be a great example, but it illustrates the concept. Here db is some resource that all the subcommands depend on. I would like a single function to be responsible for initializing the db resource and then passing the initialized resource down to the subcommands. I can't figure out how to do this with urfave/cli/v2.
You could do it by creating two separate cli.Apps: one that parses the db part of the arguments just to create a context.Context with context.WithValue, and then use that context to create the second cli.App, which would parse the remainder of the arguments. I'm sure there's a better way to do it.
I'm grateful for any help!
You can achieve this with context values: you set the value in the Before callback of the parent Command. The code below is copied and modified from the subcommands example:
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/urfave/cli/v2"
)

func main() {
	app := &cli.App{
		Commands: []*cli.Command{
			{
				Name: "db",
				Before: func(c *cli.Context) error {
					db := "example"
					c.Context = context.WithValue(c.Context, "db", db)
					return nil
				},
				Subcommands: []*cli.Command{
					{
						Name: "connect",
						Action: func(c *cli.Context) error {
							db := c.Context.Value("db").(string) // remember to assert to original type
							fmt.Println("sub command:", db)
							return nil
						},
					},
				},
			},
		},
	}

	err := app.Run(os.Args)
	if err != nil {
		log.Fatal(err)
	}
}
This main uses a string so that you can copy, paste, and run it; replace the string with your DB object.
How to test:
$ go build -o example
$ ./example db connect
sub command: example
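One refinement worth mentioning: plain strings as context keys risk colliding with keys set by other packages (linters such as staticcheck flag this). A minimal variant of the program above using an unexported key type (the dbKey name is just an illustration) could look like this:

package main

import (
	"context"
	"errors"
	"fmt"
	"log"
	"os"

	"github.com/urfave/cli/v2"
)

// dbKey is an unexported key type, so values stored under it cannot
// collide with context values set by other packages.
type dbKey struct{}

func main() {
	app := &cli.App{
		Commands: []*cli.Command{
			{
				Name: "db",
				Before: func(c *cli.Context) error {
					// Initialize the shared resource once; a plain string stands in for the DB.
					c.Context = context.WithValue(c.Context, dbKey{}, "example")
					return nil
				},
				Subcommands: []*cli.Command{
					{
						Name: "connect",
						Action: func(c *cli.Context) error {
							db, ok := c.Context.Value(dbKey{}).(string)
							if !ok {
								return errors.New("db resource not initialized")
							}
							fmt.Println("sub command:", db)
							return nil
						},
					},
				},
			},
		},
	}

	if err := app.Run(os.Args); err != nil {
		log.Fatal(err)
	}
}

It builds and runs the same way: ./example db connect still prints sub command: example.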
This is a newbie question. The dependency seems to be on GitHub, and it's pretty obvious from the import, so why doesn't go run work?
Error is: no required module provides package github.com/hashicorp/go-getter
package main

import (
	"context"
	"fmt"
	"os"

	// Problem with line below, getting error: no required module provides package
	getter "github.com/hashicorp/go-getter"
)

func main() {
	client := &getter.Client{
		Ctx: context.Background(),
		// define the destination to where the directory will be stored. This will create the directory if it doesn't exist
		Dst: "/tmp/gogetter",
		Dir: true,
		// the repository with a subdirectory I would like to clone only
		Src:  "github.com/hashicorp/terraform/examples/cross-provider",
		Mode: getter.ClientModeDir,
		// define the type of detectors go getter should use, in this case only github is needed
		Detectors: []getter.Detector{
			&getter.GitHubDetector{},
		},
		// provide the getter needed to download the files
		Getters: map[string]getter.Getter{
			"git": &getter.GitGetter{},
		},
	}

	// download the files
	if err := client.Get(); err != nil {
		fmt.Fprintf(os.Stderr, "Error getting path %s: %v", client.Src, err)
		os.Exit(1)
	}

	// now you should check your temp directory for the files to see if they exist
}
Create a folder somewhere called getter, then create a file
getter/getter.go:
package main

import (
	"fmt"

	"github.com/hashicorp/go-getter/v2"
)

func main() {
	fmt.Println(getter.ErrUnauthorized)
}
Notice I didn't use an import alias like you did, as it's redundant in this case. The package is already called getter [1], so you don't need to specify the same name. Then, run:
go mod init getter
go mod tidy
go build
https://pkg.go.dev/github.com/hashicorp/go-getter/v2
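For reference, after go mod tidy your go.mod should end up with a require line for the module, roughly like the sketch below (the Go version and module version shown here are placeholders; go mod tidy fills in whatever it actually resolves):

module getter

go 1.17 // placeholder; matches your installed Go version

require github.com/hashicorp/go-getter/v2 v2.0.0 // placeholder version; go mod tidy picks the real one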
I want to publish my first Go library on GitHub.
I created a scan_test.go which has many tests that connect to a PostgreSQL database. It doesn't need any data, only a valid connection, since it tests the results of static queries, for example select 1 union select 2.
So, in order to release the package in a way that the tests still work, how do I allow configuring the database for the tests? One idea that comes up is to use environment variables, but what's the official way? How do I properly create tests for my project?
example of my test file:
const (
	host     = "localhost"
	port     = 5432
	user     = "ufk"
	password = "your-password"
	dbname   = "mycw"
)

type StructInt struct {
	Moshe  int
	Moshe2 int
	Moshe3 []int
	Moshe4 []*int
	Moshe5 []string
}

func TestVarsInStructInJsonArrayWithOneColumn(t *testing.T) {
	if conn, err := GetDbConnection(); err != nil {
		t.Errorf("could not connect to database: %v", err)
	} else {
		sqlQuery := `select json_build_array(json_build_object('moshe',55,'moshe2',66,'moshe3','{10,11}'::int[],'moshe4','{50,51}'::int[],
			'moshe5','{kfir,moshe}'::text[]),
			json_build_object('moshe',56,'moshe2',67,'moshe3','{41,42}'::int[],'moshe4','{21,22}'::int[],
			'moshe5','{kfirrrr,moshrre}'::text[])) as moshe;`
		var foo []StructInt
		if isEmpty, err := Query(context.Background(), conn, &foo, sqlQuery); err != nil {
			t.Errorf("failed test: %v", err)
		} else if isEmpty {
			log.Fatal("failed test with empty results")
		}
		if foo[0].Moshe != 55 {
			t.Errorf("int slice test failed 21 <> %v", foo[0].Moshe)
		}
		if foo[1].Moshe2 != 67 {
			t.Errorf("int slice failed with 82 <> %v", foo[1].Moshe2)
		}
		if len(foo[1].Moshe3) != 2 {
			t.Errorf("int slice failed, array size should be 2 <> %v", len(foo[1].Moshe3))
		}
		if foo[1].Moshe3[1] != 42 {
			t.Errorf("int slice failed, moshe3[0] not 2 <=> %v", foo[1].Moshe3[1])
		}
		if len(foo[1].Moshe4) != 2 {
			t.Errorf("int slice failed, array size should be 2 <> %v", len(foo[1].Moshe4))
		}
		if *foo[1].Moshe4[1] != 22 {
			t.Errorf("int slice failed, moshe4[1] not 4 <=> %v", foo[1].Moshe4[1])
		}
	}
}

func GetDbConnection() (*pgxpool.Pool, error) {
	psqlInfo := fmt.Sprintf("host=%s port=%d user=%s "+
		"password=%s dbname=%s sslmode=disable",
		host, port, user, password, dbname)
	return pgxpool.Connect(context.Background(), psqlInfo)
}
thanks
A typical approach is to combine usage of TestMain with a documented set of environment variables and/or command-line options which can be passed to the testing binary built to run the tests of a particular package.
Basically, TestMain reads the environment and/or command-line options, validates them, maybe populates some exported variables available to the test suite, and then runs the suite.
Here at my $dayjob we use the described approach together with a helper function named something like SkipIfNoDatabase(t *testing.T), which inspects the state set up by TestMain and, if the required DB connectivity configuration was not provided, skips the test from whose prologue it is called (after logging the reason). This allows running a test suite without setting up a database: all the tests which require one will be skipped.
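Adapted to your PostgreSQL setup, a minimal sketch could look like the following (the PG_DSN variable name, the package name and the pgx/v4 import path are assumptions made to keep the example self-contained; use whatever matches your library):

package mylib_test

import (
	"context"
	"fmt"
	"os"
	"testing"

	"github.com/jackc/pgx/v4/pgxpool"
)

// dsn is populated by TestMain; tests that need a database skip
// themselves when it is left empty.
var dsn string

func TestMain(m *testing.M) {
	// e.g. PG_DSN="host=localhost port=5432 user=ufk password=... dbname=mycw sslmode=disable"
	dsn = os.Getenv("PG_DSN")
	if dsn == "" {
		fmt.Fprintln(os.Stderr, "PG_DSN not set; database tests will be skipped")
	}
	os.Exit(m.Run())
}

// skipIfNoDatabase skips the calling test when no DSN was provided.
func skipIfNoDatabase(t *testing.T) {
	t.Helper()
	if dsn == "" {
		t.Skip("PG_DSN not set")
	}
}

func TestSelectOne(t *testing.T) {
	skipIfNoDatabase(t)

	conn, err := pgxpool.Connect(context.Background(), dsn)
	if err != nil {
		t.Fatalf("could not connect to database: %v", err)
	}
	defer conn.Close()
	// ... run the static queries from your existing tests against conn ...
}

With that in place, go test ./... still passes on a machine without a database (the DB tests are skipped), and CI only has to export PG_DSN before running the suite.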
An example for running tests in Gitlab CI follows.
The .gitlab-ci.yml in a project contains, among other things, something like
stages:
  - test

.golang_stage:
  image: internal-corporate-registry:5000/go-test:1.14
  variables:
    MONGODB_URI: 'mongodb://test-mongodb:27017'
  services:
    - name: mongo:3.6-xenial
      alias: test-mongodb

test_golang:
  extends: .golang_stage
  stage: test
  tags:
    - common
  only:
    - pushes
    - schedules
  script:
    - make test
    - make build
This configuration makes sure that when our Go program (which makes use of a MongoDB instance) is tested, the mongo:3.6-xenial Docker image is pulled, run, and assigned the hostname test-mongodb, and that the MONGODB_URI environment variable is set to refer to that running MongoDB instance.
Now, the program features the package lib/testing/testdb which contains
package testdb

import (
	"io"
	"os"
	"testing"
)

// DBName is the MongoDB database name to be used in tests.
const DBName = "blah_blah_test"

var (
	// MongoDBURI identifies a MongoDB instance to use by testing suite.
	MongoDBURI string

	// Enabled is only set to true if the MongoDB URI was made available
	// to the test suite. It can be used by individual tests to skip
	// execution if an access to a MongoDB instance is required to perform
	// the test.
	Enabled bool
)

// Initialize initializes the package's global state.
//
// Initialize is intended to be called once per package being tested -
// typically from the package's TestMain function.
func Initialize() {
	MongoDBURI = os.Getenv("MONGODB_URI")
	if MongoDBURI == "" {
		Enabled = false
		io.WriteString(os.Stderr,
			"Empty or missing environment variable MONGODB_URI; related tests will be skipped\n")
		return
	}

	Enabled = true
}

// SkipIfDisabled skips the current test if it appears there is no
// MongoDB instance to use.
func SkipIfDisabled(t *testing.T) {
	if !Enabled {
		t.Skip("Empty or missing MONGODB_URI environment variable; skipped.")
	}
}
…and then each package which makes use of the database contains a file named main_test.go which reads
package whatever_test

import (
	"testing"

	"acme.com/app/lib/testing/testdb"
	"acme.com/app/lib/testing/testmain"
)

func TestMain(m *testing.M) {
	testdb.Initialize()
	testmain.Run(m)
}
and the tests themselves roll like this
package whatever_test

import (
	"testing"

	"acme.com/app/lib/testing/testdb"
)

func TestFoo(t *testing.T) {
	testdb.SkipIfDisabled(t)
	// Do testing
}
testmain is another internal package; it exports the Run function, which runs the tests, but before doing that it takes care of additional initialization: it sets up logging for the app's code and figures out whether it was asked to run stress tests (which are only run at night, on a schedule).
package testmain

import (
	"flag"
	"os"
	"testing"

	"acme.com/app/lib/logging"
)

// Stress is true if the stress tests are enabled in this run
// of the test suite.
var Stress bool

// Run initializes the package state and then runs the test suite the
// way `go test` does by default.
//
// Run is expected to be called from TestMain functions of the test suites
// which make use of the testmain package.
func Run(m *testing.M) {
	initialize()
	os.Exit(m.Run())
}

// SkipIfNotStress marks the test currently executed by t as skipped
// unless the current test suite is running with the stress tests enabled.
func SkipIfNotStress(t *testing.T) {
	if !Stress {
		t.Skip("Skipped test: not in stress-test mode.")
	}
}

func initialize() {
	if flag.Parsed() {
		return
	}

	var logFileName string
	flag.BoolVar(&Stress, "stress", false, "Run stress tests")
	flag.StringVar(&logFileName, "log", "", "Name of the file to redirect log output into")
	flag.Parse()

	logging.SetupEx(logging.Params{
		Path:      logFileName,
		Overwrite: true,
		Mode:      logging.DefaultFileMode,
	})
}
The relevant bits of the project's Makefile which run stress tests look like this:
STRESS_LOG ?= stress.log

.PHONY: all test build stress install/linter

all: test build

build:
	go build -ldflags='$(LDFLAGS)'

test:
	go test ./...

stress:
	go test -v -run=Stress -count=1 ./... -stress -log="$(STRESS_LOG)"
…and the CI configuration to run stress tests reads
stress:
  extends: .golang_stage
  stage: test
  tags:
    - go
  only:
    - schedules
  variables:
    STRESS_LOG: "$CI_PROJECT_DIR/stress.log"
  artifacts:
    paths:
      - "$STRESS_LOG"
    when: on_failure
    expire_in: 1 week
  script:
    - make stress
I have Go code that reads a JSON file. It runs fine locally, but after I created a Lambda package and uploaded it to Lambda, it cannot read the file:
package main

import (
	"context"
	"fmt"
	"io/ioutil"

	"github.com/aws/aws-lambda-go/lambda"
)

type MyEvent struct {
	Name string `json:"name"`
}

func HandleRequest(ctx context.Context, name MyEvent) (string, error) {
	jsonBytes, err := ioutil.ReadFile("mappings.json")
	fmt.Println(string(jsonBytes))
	fmt.Println(err)
	return fmt.Sprintf("Hello %s!", name.Name), nil
}

func main() {
	lambda.Start(HandleRequest)
}
How do I read the file from AWS Lambda? Any ideas?
I used your sample code, put it in the zip file together with the mappings file, and tested it on AWS Lambda. Link to the code: https://github.com/nihanthd/stackoverflow/tree/master/lambda
The handler name in AWS Lambda is trial.
Test data used to trigger the function (the AWS Lambda event):
{
  "name": "Vignesh"
}
Commands used to build the executable and create the zip file
$ GOARCH=amd64 GOOS=linux go build trial.go
$ zip trial.zip trial mappings.json
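If it still cannot find the file after zipping it alongside the binary, a more defensive variant of the handler (just a sketch; it relies on the standard LAMBDA_TASK_ROOT environment variable exposed by the Lambda runtime) resolves the path against the directory the deployment package was extracted into:

package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"

	"github.com/aws/aws-lambda-go/lambda"
)

type MyEvent struct {
	Name string `json:"name"`
}

func HandleRequest(ctx context.Context, name MyEvent) (string, error) {
	// LAMBDA_TASK_ROOT points at the directory the deployment package was
	// extracted into; fall back to "." so the same code runs locally.
	root := os.Getenv("LAMBDA_TASK_ROOT")
	if root == "" {
		root = "."
	}

	jsonBytes, err := ioutil.ReadFile(filepath.Join(root, "mappings.json"))
	if err != nil {
		return "", fmt.Errorf("reading mappings.json: %v", err)
	}
	fmt.Println(string(jsonBytes))

	return fmt.Sprintf("Hello %s!", name.Name), nil
}

func main() {
	lambda.Start(HandleRequest)
}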
I have a cli tool written in Go which produces the following output:
Command: config get
Env: int
Component: foo-component
Unable to find any configuration within Cosmos (http://api.foo.com) for foo-component.
I would like to verify this output within a test.
The test I have written (and doesn't pass) is as follows:
package command

import (
	"testing"

	"github.com/my/package/foo"
)

type FakeCliContext struct{}

func (s FakeCliContext) String(name string) string {
	return "foobar"
}

func ExampleInvalidComponentReturnsError() {
	fakeBaseURL := "http://api.foo.com"
	fakeCliContext := &FakeCliContext{}
	fakeFetchFlag := func(foo.CliContext) (map[string]string, error) {
		return map[string]string{
			"env":       "int",
			"component": "foo-component",
		}, nil
	}

	GetConfig(*fakeCliContext, fakeFetchFlag, fakeBaseURL)

	// Output:
	// Command: config get
	// Env: int
	// Component: foo-component
	//
	// Unable to find any configuration within Cosmos (http://api.foo.com) for foo-component.
}
The majority of the code is creating fake objects that I'm injecting into my function call GetConfig.
Effectively there is no return value from GetConfig, only the side effect of text being printed to stdout.
So I'm using the Example<NameOfTest> format to try and verify the output.
But all I get back when I run go test -v is:
=== RUN ExampleInvalidComponentReturnsError
exit status 1
FAIL github.com/my/package/thing 0.418s
Does anyone know what I might be missing?
I've found that if I add an additional test after the 'Example' one above, for example one called Test<NameOfTest> (but consisting of effectively the same code), then this will actually display the function's output on stdout when running the test:
func TestInvalidComponentReturnsError(t *testing.T) {
	fakeBaseURL := "http://api.foo.com"
	fakeCliContext := &FakeCliContext{}
	fakeFetchFlag := func(utils.CliContext) (map[string]string, error) {
		return map[string]string{
			"env":       "int",
			"component": "foo-component",
		}, nil
	}

	GetConfig(*fakeCliContext, fakeFetchFlag, fakeBaseURL)
}
The above example test will now show the following output when executing go test -v:
=== RUN TestInvalidComponentReturnsError
Command: config get
Env: int
Component: foo-component
Unable to find any configuration within Cosmos (http://api.foo.com) for foo-component.
exit status 1
FAIL github.com/bbc/apollo/command 0.938s
OK, so the solution to this problem was part architecture and part removal/refactoring of code:
I extracted the private functions from the cli command package so they became public functions in a separate package.
I refactored the code so that all dependencies were injected; this then allowed me to mock these objects and verify that the expected methods were called.
Now that the private functions live in a package and are public, I'm able to test those things specifically, outside of the cli context.
Finally, I removed the use of os.Exit, as that was a nightmare to deal with and wasn't really necessary.
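To make that concrete, here is a rough sketch of the shape the code ended up in. getConfig is a simplified stand-in for the real GetConfig (the real signature differs); the point is that the output writer is injected, so a plain Test function can capture and assert on the output instead of relying on an Example test:

package command

import (
	"bytes"
	"fmt"
	"io"
	"strings"
	"testing"
)

// getConfig is a simplified stand-in for the real GetConfig: it takes the
// writer as a dependency instead of printing straight to os.Stdout.
func getConfig(w io.Writer, baseURL, env, component string) error {
	fmt.Fprintf(w, "Command: config get\nEnv: %s\nComponent: %s\n\n", env, component)
	fmt.Fprintf(w, "Unable to find any configuration within Cosmos (%s) for %s.\n", baseURL, component)
	return nil
}

func TestInvalidComponentReturnsError(t *testing.T) {
	var buf bytes.Buffer
	if err := getConfig(&buf, "http://api.foo.com", "int", "foo-component"); err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if !strings.Contains(buf.String(), "Unable to find any configuration") {
		t.Errorf("unexpected output:\n%s", buf.String())
	}
}

Because nothing calls os.Exit and the writer is a plain bytes.Buffer, the test fails with a readable diff instead of "exit status 1".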
I have been looking for an example GAE script in Go to fetch the image that I got from the resulting screenshot of PageSpeed Insights and saved as a json_decode'd object, using Kohana/Cache, to Google Cloud Storage (GCS).
The reason for this method is simply that I found this Kohana model to be the most convenient way of writing files to GCS, although I am also looking at other ways, like this, to write files to GCS using Blobstore and serve them from there, since the Go Files API has been deprecated as documented here.
Here is the form of the stored object containing the screenshot image data (base64), which is saved as public in the default application bucket with the object name images/thumb/mythumb.jpg:
stdClass Object
(
    [screenshot] => stdClass Object
        (
            [data] => _9j_4AAQSkZJRgABAQAAAQABAAD_...= // base64 data
            [height] => 240
            [mime_type] => image/jpeg
            [width] => 320
        )

    [otherdata] => Array
        (
            [..] => ..
            [..] => ..
        )

)
I want to get this image (which is set as public) through my customized URL below, processed by the Go module, and I also need it to expire after a certain time, because I update the image content itself regularly:
http://myappId.appspot.com/image/thumb/mythumb.jpg
I have set dispatch.yaml to send all image requests to my Go module as below:
- url: "*/images/*"
module: go
and set the handlers in go.yaml to process the image requests as below:
handlers:
- url: /images/thumb/.*
  script: _go_app
- url: /images
  static_dir: images
With these directives, all /images/ requests (other than /images/thumb/ requests) serve images from the static directory, while /images/thumb/mythumb.jpg goes to the module application.
So what is left is what code I have to use (see ????) in my application file named thumb.go, as below:
package thumb

import (
	//what to import
	????
	????
)

const (
	googleAccessID            = "<serviceAccountEmail>@developer.gserviceaccount.com"
	serviceAccountPEMFilename = "YOUR_SERVICE_ACCOUNT_KEY.pem"
	bucket                    = "myappId.appspot.com"
)

var (
	expiration = time.Now().Add(time.Second * 60) //expire in 60 seconds
)

func init() {
	http.HandleFunc("/images/thumb/", handleThumb)
}

func handleThumb(w http.ResponseWriter, r *http.Request) {
	ctx := cloud.NewContext(appengine.AppID(c), hc)
	???? //what code to get the string of 'mythumb.jpg' from url
	???? //what code to get the image stored data from GCS
	???? //what code to encode base64 data
	w.Header().Set("Content-Type", "image/jpeg;")
	fmt.Fprintf(w, "%v", mythumb.jpg)
}
I have taken code from several examples like this, this or this, but could not get any of them to work so far. I have also tried a sample from this, which is quite close to my case, but had no luck either.
So generally it was mainly due to not knowing the correct code to put on the lines I marked with ????, as well as the relevant libraries or paths to import. I have also checked the GCS permissions in case something was missing, as described here and here.
Thank you very much for your help and advice.
From what I've read in your description, it seems that the only relevant parts are the ???? lines in the actual Go code. Let me know if that's not the case.
First ????: "what code to get the string of 'mythumb.jpg' from url"?
From reading the code, you're looking to extract mythumb.jpg from a url like http://localhost/images/thumb/mythumb.jpg. A working example is available at the Writing Web Applications tutorial:
package main

import (
	"fmt"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hi there, I love %s!", r.URL.Path[1:])
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}
Such that
http://localhost:8080/monkeys
Prints
Hi there, I love monkeys!
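Applied to your route, a minimal sketch of just the extraction step (assuming the handler is registered on /images/thumb/ as in your init function) could be:

package thumb

import (
	"fmt"
	"net/http"
	"path"
	"strings"
)

func handleThumb(w http.ResponseWriter, r *http.Request) {
	// For a request like /images/thumb/mythumb.jpg this yields "mythumb.jpg".
	name := strings.TrimPrefix(r.URL.Path, "/images/thumb/")
	// path.Base guards against nested paths such as /images/thumb/a/b.jpg.
	name = path.Base(name)
	if name == "" || name == "." || name == "/" {
		http.NotFound(w, r)
		return
	}

	// Placeholder: the next step (your second ????) looks `name` up in GCS.
	fmt.Fprintln(w, "requested object:", name)
}

An alternative is to wrap the handler with http.StripPrefix on the route, so it receives an already-trimmed path.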
Second ????: "what code to get the image stored data from GCS"?
The API method you're probably looking to use is storage.objects.get.
You did link to one of the JSON API Go Examples for Google Cloud Storage, which is a good general reference, but is not related to the problem you're trying to solve. That particular example is put together for Client-side applications (hence the redirectURL = "urn:ietf:wg:oauth:2.0:oob" line). Additionally, this sample uses deprecated/out-of-date oauth2 and storage packages.
One of the cleanest (and non-deprecated) ways to do this for an application which wants to access its own buckets on behalf of itself would be to use the golang/oauth2 and Google APIs Client Library for Go packages.
An example of how to authenticate with JSON Web Token auth with the golang/oauth2 package is available in the repo:
func ExampleJWTConfig() {
	conf := &jwt.Config{
		Email: "xxx@developer.com",
		// The contents of your RSA private key or your PEM file
		// that contains a private key.
		// If you have a p12 file instead, you
		// can use `openssl` to export the private key into a pem file.
		//
		//    $ openssl pkcs12 -in key.p12 -out key.pem -nodes
		//
		// It only supports PEM containers with no passphrase.
		PrivateKey: []byte("-----BEGIN RSA PRIVATE KEY-----..."),
		Subject:    "user@example.com",
		TokenURL:   "https://provider.com/o/oauth2/token",
	}

	// Initiate an http.Client, the following GET request will be
	// authorized and authenticated on the behalf of user@example.com.
	client := conf.Client(oauth2.NoContext)
	client.Get("...")
}
Next, instead of using the oauth2 client directly, use that client with the Google APIs Client Library for Go mentioned earlier:
service, err := storage.New(client)
if err != nil {
	fatalf(service, "Failed to create service %v", err)
}
Notice the similarity to the out-of-date JSON API Go Examples?
In your handler, you'll want to go out and get the related object using func ObjectsService.Get. Assuming that you know the name of the object and bucket, that is.
Straight from the previous example, you can use code similar to what's below to retrieve the download link:
if res, err := service.Objects.Get(bucketName, objectName).Do(); err == nil {
	fmt.Printf("The media download link for %v/%v is %v.\n\n", bucketName, res.Name, res.MediaLink)
} else {
	fatalf(service, "Failed to get %s/%s: %s.", bucketName, objectName, err)
}
Then, fetch the file, or do whatever you want with it. Full example:
import (
	"fmt"

	"golang.org/x/oauth2"
	"golang.org/x/oauth2/jwt"
	"google.golang.org/api/storage/v1"
)

...

const (
	bucketName = "YOUR_BUCKET_NAME"
	objectName = "mythumb.jpg"
)

func main() {
	conf := &jwt.Config{
		Email:      "xxx@developer.com",
		PrivateKey: []byte("-----BEGIN RSA PRIVATE KEY-----..."),
		Subject:    "user@example.com",
		TokenURL:   "https://provider.com/o/oauth2/token",
	}

	client := conf.Client(oauth2.NoContext)

	service, err := storage.New(client)
	if err != nil {
		fatalf(service, "Failed to create service %v", err)
	}

	if res, err := service.Objects.Get(bucketName, objectName).Do(); err == nil {
		fmt.Printf("The media download link for %v/%v is %v.\n\n", bucketName, res.Name, res.MediaLink)
	} else {
		fatalf(service, "Failed to get %s/%s: %s.", bucketName, objectName, err)
	}

	// Go fetch the file, etc.
}
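To finish the handler, one way (a sketch only; it simply reuses the authorized client from above and streams the object's MediaLink back to the caller) is:

package main

import (
	"io"
	"log"
	"net/http"
)

// serveObject downloads the object via its MediaLink using the same
// authorized client and copies the bytes to the HTTP response.
func serveObject(w http.ResponseWriter, client *http.Client, mediaLink string) {
	resp, err := client.Get(mediaLink)
	if err != nil {
		http.Error(w, "failed to fetch object", http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()

	w.Header().Set("Content-Type", "image/jpeg")
	if _, err := io.Copy(w, resp.Body); err != nil {
		log.Printf("copying object to response: %v", err)
	}
}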
Third ????: "what code to encode base64 data"?
Pretty simple with the encoding/base64 package. So simple, in fact, that they've included an example:
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	data := []byte("any + old & data")
	str := base64.StdEncoding.EncodeToString(data)
	fmt.Println(str)
}
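In your case you will mostly be decoding rather than encoding, and the screenshot data in your dump starts with _9j_..., which looks like the URL-safe base64 variant (a plain JPEG would start with /9j/). A hedged sketch of the decode step, using a truncated sample from your question, would be:

package main

import (
	"encoding/base64"
	"fmt"
	"log"
)

func main() {
	// Truncated sample taken from the question; the real data is much longer.
	data := "_9j_4AAQSkZJRgABAQAAAQABAAD_"

	// URLEncoding maps '-' and '_' instead of '+' and '/', which matches
	// the "_9j_..." prefix of the stored screenshot data.
	raw, err := base64.URLEncoding.DecodeString(data)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("decoded %d bytes; first bytes: % x\n", len(raw), raw[:3])
}

The decoded bytes can then be written to the response with the image/jpeg content type, which is what the remaining ???? lines in handleThumb were after.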
Hope that helps.