How to unit test with go-git

How do I write a unit test for code that clones a repo using go-git?
Below is a sample of the function I have created. It clones multiple repos and reads a particular file from each repo; I am unsure how to unit test this function.
func cloneRepository(repository string) (repo beans.Repo) {
	dir, err := os.MkdirTemp("./", "temp") // create a temp folder to clone the repo into
	if err != nil {
		log.Fatal(err)
	}
	_, err = git.PlainClone(dir, false, &git.CloneOptions{
		URL:   repository,
		Depth: 1,
	})
	if err != nil {
		log.Fatal(err)
	}
	// Custom util function to parse a file in the repository.
	if err = util.ParseYmlFile("filename.yml", &repo); err != nil {
		log.Fatal(err)
	}
	if err = os.RemoveAll(dir); err != nil {
		log.Fatal(err)
	}
	return repo
}

You can't replace the package-level git.PlainClone() function directly, but you can put the clone step behind an interface (or a function variable) and have your tests swap in a fake that produces a custom file tree.
Also take a look at spf13's afero library, which provides a filesystem mocking solution.

What we did in the past was create a bare git repository with some predefined content, put it under e.g. testdata/myrepo.git, and use it during unit testing.
Commit that fixture repo normally as part of your project.

Related

How to use Go to get the Github commit history of a given file of a repository

Like the title said, my question here is how to use Go to programmatically get the Github commit history of a given file in a given repository
It seems that you need to access the GitHub API from Go. There are plenty of libraries, but I would recommend using go-github.
Here is how you can try doing that:
package main

import (
	"context"
	"fmt"

	"github.com/google/go-github/github"
)

func main() {
	username := "MayukhSobo"
	client := github.NewClient(nil)
	// Pass CommitsListOptions with Path to restrict the history to one file.
	opts := &github.CommitsListOptions{Path: "path/to/file"}
	commits, _, err := client.Repositories.ListCommits(context.Background(), username, "Awesome-Snippets", opts)
	if err != nil {
		panic(err)
	}
	for _, commit := range commits {
		// Use the commit, e.g. commit.GetSHA() or commit.Commit.GetMessage().
		fmt.Println(commit.GetSHA())
	}
}
If you are trying to access some other public repo, pass its owner's name as the username and change the repo name accordingly.
If you face access issues, it is probably a private repo. In that case you can set up access with a key pair or token.

TestMain for all tests?

I have a fairly large project with many integration tests sprinkled throughout different packages. I'm using build tags to separate unit, integration and e2e tests.
I need to do some setup before running my integration and e2e tests, so I put a TestMain function in a main_test.go file in the root directory. It's pretty simple:
//go:build integration || e2e
// +build integration e2e

package test

import (
	"os"
	"testing"
)

func TestMain(m *testing.M) {
	if err := setup(); err != nil {
		os.Exit(1)
	}
	exitCode := m.Run()
	if err := tearDown(); err != nil {
		os.Exit(1)
	}
	os.Exit(exitCode)
}

func setup() error {
	// setup stuff here...
	return nil
}

func tearDown() error {
	// tear down stuff here...
	return nil
}
However, when I run test:
$ go test -v --tags=integration ./...
testing: warning: no tests to run
PASS
# all of my subdirectory tests now run and fail...
I really don't want to write a TestMain in each package that requires it and was hoping I could just get away with one in the root. Is there any solution that you could suggest? Thanks.
The only alternative I can think of is setting up everything outside of code and then running the integration tests. Maybe some shell script that does setup and then calls $ go test?
The go test ./... command compiles a test binary for each package and runs them one by one. This is also why you get a cannot use -o flag with multiple packages error if you attempt to specify an output, and it is why code in your root package doesn't affect your sub-packages.
So the only way to get this to work is to put all your setup logic in some sort of "setup" package and call the shared code from each of your sub-packages (still a lot of work, I know).
Trying to avoid code repetition, I used a function that makes the setup/teardown and evaluates a function as a test.
The function should look like this
func WithTestSetup(t *testing.T, testFunction func()) {
	// setup code
	// Run teardown via defer so it still executes when testFunction
	// fails the test with t.Fatal (which exits the goroutine).
	defer func() {
		// teardown code
	}()
	testFunction()
}
I use the t *testing.T argument to report errors in setup or teardown, but it can be omitted.
Then in your tests you can write:
func TestFoo(t *testing.T) {
	WithTestSetup(t, func() {
		if err := Foo(); err != nil {
			t.Fatal(err)
		}
	})
}
Just call WithTestSetup where needed; this looks easier to me than adding a bunch of TestMain functions across the project.

How can I list all image URLs inside a GCP project with an API?

I'm trying to write an application in Go that uses the Container Analysis API to get all the image vulnerabilities inside a GCP project.
The Go client library for this API has the function findVulnerabilityOccurrencesForImage() to do this; however, it requires you to pass the project ID and the URL of the image you want the vulnerability report for, in the form resourceURL := "https://gcr.io/my-project/my-repo/my-image". This means that if there are multiple images in your project, you first have to list and store their URLs, and only then can you call findVulnerabilityOccurrencesForImage() for each one to get ALL of the vulnerabilities.
So I need a way to get and store the URLs of all images in all repos of a given GCP project, but so far I couldn't find a solution. I can easily do this from the CLI by running the gcloud container images list command, but I don't see how that can be done with an API.
Thank you in advance for your help!
You can use the Cloud Storage package and the Objects method to do so. For example:
func GetURLs() ([]string, error) {
	ctx := context.Background()
	client, err := storage.NewClient(ctx) // uses Application Default Credentials
	if err != nil {
		return nil, fmt.Errorf("creating client: %w", err)
	}
	defer client.Close()
	bucket := "bucket-name"
	urls := []string{}
	results := client.Bucket(bucket).Objects(ctx, nil)
	for {
		attrs, err := results.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			return nil, fmt.Errorf("iterating results: %w", err)
		}
		urls = append(urls, fmt.Sprint("https://storage.googleapis.com", "/", bucket, "/", attrs.Name))
	}
	return urls, nil
}

Set up database for integration tests with TestMain across multiple packages

I am trying to write database integration tests in my go application for my repositories files.
My idea was to leverage the TestMain function to do the database bootstrap before the tests are run.
Example:
test/integration/integration_test.go
// +build integrationdb

package main

import (
	"flag"
	"os"
	"testing"
)

func TestMain(m *testing.M) {
	flag.Parse()
	// setup database
	setupDB()
	// run tests
	os.Exit(m.Run())
}
Because this is global to all my integration tests I have placed this code in the test/integration package.
Then in each module/package of my application, together with my repository.go code, I have a repository_test.go with my test code:
// +build integrationdb

package todo_test

import (
	"testing"

	"github.com/brpaz/go-api-sample/internal/todo"
	"github.com/brpaz/go-api-sample/test/testutil"
	"github.com/stretchr/testify/assert"
)

func TestPgRepository_CreateTodo(t *testing.T) {
	db, err := testutil.GetTestDBConnection()
	if err != nil {
		t.Fatal(err)
	}
	repo := todo.NewPgRepository(db)
	err = repo.CreateTodo(todo.CreateTodo{
		Description: "some-todo",
	})
	assert.Nil(t, err)
}
The setup works fine; the issue is that when running the tests it says "testing: warning: no tests to run" and the tests still execute without the TestMain function being run.
It seems that TestMain is per package? Is that true? Any workarounds?
I could put all the test files into a single, separate package (I do that for higher-level tests like acceptance and smoke tests), but since I am testing each repository individually, it doesn't differ that much from a unit test, so I think it makes sense to keep them together with the code.
I guess I could create a TestMain per package and keep a global variable in my shared testutils package that tracks whether the database was initialized, running the setup only if it isn't set. Not really a fan of globals, though.
The other alternative I see is to move the setup code outside of the test logic. I'd prefer to keep it tightly integrated with Go, since that makes it easier to defer cleanup functions after the tests run, for example.
Maybe I can create my own "integration test runner" command, that would run the setup code and then call "go test" programmatically?
Any other ideas?
The tests of each package can be run independently, as they should be. The only missing link was the bootstrap and teardown of the test database.
I decided to create a command in my application that bootstraps the test database and then runs go test.
I could keep this bootstrap logic separate, let's say in a bash script, but I feel this way makes it easier.
Here is the code I ended up with for reference:
test/integration/db/main.go
func main() {
	flag.Parse()
	log.Println("Setup DB")
	// setupDB creates a new test database and runs the application migrations before the tests start.
	setupDB()
	log.Println("Running Tests")
	// calls go test to execute the tests.
	err := runTests()
	// delete the test database before reporting any failure,
	// so teardown runs even when the tests fail.
	teardownDB()
	if err != nil {
		log.Fatal(err)
	}
}

func runTests() error {
	// TODO these flags could be passed through from the original command instead.
	cmd := exec.Command("go", "test", "-v", "--tags", "integrationdb", "-p", "1", "./...")
	cmd.Env = os.Environ()
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

Google Cloud Bigtable authentication with Go

I'm trying to insert a simple record as shown in the GoDoc. But this returns:
rpc error: code = 7 desc = "User can't access project: tidy-groove"
When I searched the gRPC codes, I found:
PermissionDenied Code = 7
// Unauthenticated indicates the request does not have valid
// authentication credentials for the operation.
I've enabled Bigtable in my console, created a cluster and a service account, and received the JSON key. What am I doing wrong here?
package main

import (
	"fmt"
	"io/ioutil"

	"golang.org/x/net/context"
	"golang.org/x/oauth2/google"
	"google.golang.org/cloud"
	"google.golang.org/cloud/bigtable"
)

func main() {
	fmt.Println("Start!")
	put()
}

func getClient() *bigtable.Client {
	jsonKey, err := ioutil.ReadFile("TestProject-7854ea9op741.json")
	if err != nil {
		fmt.Println(err.Error())
	}
	config, err := google.JWTConfigFromJSON(
		jsonKey,
		bigtable.Scope,
	) // or bigtable.AdminScope, etc.
	if err != nil {
		fmt.Println(err.Error())
	}
	ctx := context.Background()
	client, err := bigtable.NewClient(ctx, "tidy-groove", "asia-east1-b", "test1-bigtable", cloud.WithTokenSource(config.TokenSource(ctx)))
	if err != nil {
		fmt.Println(err.Error())
	}
	return client
}

func put() {
	ctx := context.Background()
	client := getClient()
	tbl := client.Open("table1")
	mut := bigtable.NewMutation()
	mut.Set("links", "maps.google.com", bigtable.Now(), []byte("1"))
	mut.Set("links", "golang.org", bigtable.Now(), []byte("1"))
	err := tbl.Apply(ctx, "com.google.cloud", mut)
	if err != nil {
		fmt.Println(err.Error())
	}
}
I've solved the problem. There's nothing wrong with the code; the problem was the config JSON itself. So for anyone out there who wants to authenticate and came here via a Google search: this code is correct and works perfectly. What I had done wrong is as follows.
First I made a service account and got the JSON key. But Google warned me that I'm not an owner of the project, hence the account wouldn't be added to the accept list; it let me download the JSON anyway.
Then I deleted that key from the console and asked the project owner to create a key for me.
He created another key with the same name I had given. Since he's the owner, no error or warning messages were displayed, and the JSON file was downloaded successfully.
When I tried with that key, my problem began. That's when I posted this question.
After that, with no solutions in sight, I asked the owner to delete that key and create yet another one, but with a different name.
Then it worked! It seems that creating a key with a non-owner account and then creating one again with the same name (after deleting the original, of course) has no effect. Hope this helps everyone out there :)
Take a look at helloworld.go or search.go, which use the GOOGLE_APPLICATION_CREDENTIALS environment variable.
For most environments you no longer even need to set GOOGLE_APPLICATION_CREDENTIALS. Google Cloud Platform, Managed VMs, and Google App Engine all have the right thing set for you. Your desktop environment will also be correct if you've used gcloud init or its predecessors gcloud auth login followed by gcloud config set project <projectID>.
