How to do an integration test for a service that depends on another service in a microservice environment? - go

I am building a microservice app and am currently writing some tests. The function I am testing is below. It belongs to the cart service: it fetches all items in a user's cart and enriches each item with its product details from the catalog service.
func (s *Server) Grpc_GetCartItems(ctx context.Context, in *pb.GetCartItemsRequest) (*pb.ItemsResponse, error) {
    // Get product ids and their quantities in the cart by userId
    res, err := s.Repo.GetCartItems(ctx, in.UserId)
    if err != nil {
        return nil, err
    }
    // Return an empty response if there are no items in the cart
    if len(res) == 0 {
        return &pb.ItemsResponse{}, nil
    }
    // Get product ID keys from the map
    ids := GetMapKeys(res)
    // RPC call to the catalog server to get the cart products' names
    products, err := s.CatalogClient.Grpc_GetProductsByIds(ctx, &catalogpb.GetProductsByIdsRequest{ProductIds: ids})
    if err != nil {
        return nil, err
    }
    // Build the response: product id, product name, and quantity in cart
    items, err := AppendItemToResponse(products, res)
    if err != nil {
        return nil, err
    }
    return items, nil
}
The problem is that for the test setup I need to seed test data into both the cart and the catalog repositories. I can do that just fine with the cart repo, but for the catalog, is it common practice to just mock the dependency s.CatalogClient.Grpc_GetProductsByIds instead? I am still new to testing, and from what I understand you generally don't mock in integration tests, but I am not sure if there's a better way to tackle this kind of issue.

You're correct in that for an integration test you would not mock a service.
Usually, if it is a service you do not have control over, you would stub the service.
Integration tests can be run against staging or testing services (in an E2E capacity) or in a virtual environment (like compose, K8S, etc.).
I think for your requirement, I would stage it using docker-compose or something similar. If you intend to go for an E2E setup in the future, you may want to look into having a testing environment.
See: https://www.testenvironmentmanagement.com/types-of-testing-environments/
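If you do stub the catalog service, one lightweight option in Go is an in-process gRPC server over a bufconn listener, so the cart server under test talks to a real gRPC endpoint without any network or container setup. The sketch below is only an illustration: the catalogpb service, message, and field names are guesses based on the single method shown in the question, so adjust them to your generated code.
import (
    "context"
    "fmt"
    "net"
    "testing"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    "google.golang.org/grpc/test/bufconn"

    catalogpb "example.com/app/gen/catalog" // hypothetical import path
)

// stubCatalog serves canned products instead of a real catalog database.
type stubCatalog struct {
    catalogpb.UnimplementedCatalogServer // name depends on your generated code
}

func (s *stubCatalog) Grpc_GetProductsByIds(ctx context.Context, in *catalogpb.GetProductsByIdsRequest) (*catalogpb.ProductsResponse, error) {
    // Return a deterministic product for each requested id; enough for the cart test.
    out := &catalogpb.ProductsResponse{}
    for _, id := range in.ProductIds {
        out.Products = append(out.Products, &catalogpb.Product{
            Id:   id,
            Name: fmt.Sprintf("product-%v", id),
        })
    }
    return out, nil
}

// newCatalogConn starts the stub in-process and returns a client connection to it.
func newCatalogConn(t *testing.T) *grpc.ClientConn {
    t.Helper()
    lis := bufconn.Listen(1 << 20)
    srv := grpc.NewServer()
    catalogpb.RegisterCatalogServer(srv, &stubCatalog{})
    go srv.Serve(lis)
    t.Cleanup(srv.Stop)

    conn, err := grpc.DialContext(context.Background(), "bufnet",
        grpc.WithContextDialer(func(ctx context.Context, _ string) (net.Conn, error) {
            return lis.Dial()
        }),
        grpc.WithTransportCredentials(insecure.NewCredentials()),
    )
    if err != nil {
        t.Fatalf("dial bufnet: %v", err)
    }
    t.Cleanup(func() { conn.Close() })
    return conn
}
The test then builds the Server under test with its CatalogClient created from conn and its Repo pointing at the seeded cart database, which keeps the cart side of the integration real while the catalog side stays deterministic.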

Obligatory "you should not be calling one microservice directly from another" comment. While you can find a way to make testing work, you've tightly coupled the architecture. This (testing) concern is only the first of what will become many since your cart service directly ties to your catalog service. If you fix you close-coupled architecture problem, your testing problem will also be resolved.

Related

Instantiating an object in a file VS in a struct

I was reading this blog recently and saw something interesting: the object instance is initialized in the file itself and then accessed everywhere. I found it pretty convenient and was wondering whether it's the best practice.
https://dev.to/hackmamba/build-a-rest-api-with-golang-and-mongodb-gin-gonic-version-269m#:~:text=setup.go%20file%20and%20add%20the-,snippet%20below,-%3A
I'm more used to a pattern where we first create a struct like so:
type Server struct {
config util.Config
store db.Store
tokenMaker token.Maker
router *gin.Engine
}
and then set everything in main:
func NewServer(config util.Config, store db.Store) (*Server, error) {
    tokenMaker, err := token.NewPasetoMaker(config.TokenSymmetricKey)
    if err != nil {
        return nil, fmt.Errorf("cannot create token maker: %w", err)
    }
    server := &Server{
        config:     config,
        store:      store,
        tokenMaker: tokenMaker,
    }
    server.setupRouter()
    return server, nil
}
and then the server object is passed everywhere.
What's best? Is it okay to use the pattern mentioned in that blog?
Thank you.
I tried implementing both patterns. The pattern mentioned in the blog seems very convenient to use, as I'm not passing objects around and can easily access the object I'm interested in.
You can follow either of those patterns. But I think it's better to pass the object pointer wherever it's needed: it saves a lot of work and ensures everyone sees the same, up-to-date object.
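To make the comparison concrete, here is a minimal, self-contained sketch of both patterns; the Store type is invented purely for illustration:
package main

import "fmt"

type Store struct{ dsn string }

func (s *Store) Get(key string) string {
    return "value of " + key + " from " + s.dsn
}

// Pattern 1 (the blog's): a package-level instance initialized in the file.
// Convenient to reach from anywhere, but the dependency is implicit and
// awkward to replace in tests.
var globalStore = &Store{dsn: "prod-dsn"}

// Pattern 2 (the struct's): the dependency is a field, wired up explicitly.
// More plumbing, but callers can see and swap what the server depends on.
type Server struct {
    store *Store
}

func NewServer(store *Store) *Server {
    return &Server{store: store}
}

func (s *Server) Handle(key string) string {
    return s.store.Get(key)
}

func main() {
    fmt.Println(globalStore.Get("a")) // pattern 1: implicit global

    srv := NewServer(&Store{dsn: "test-dsn"}) // pattern 2: injected, easy to fake
    fmt.Println(srv.Handle("a"))
}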

How can I list all image URLs inside a GCP project with an API?

I'm trying to write an application in Go that will get all the image vulnerabilities inside a GCP project for me using the Container Analysis API.
The Go client library for this API has the function findVulnerabilityOccurrencesForImage() to do this, but it requires you to pass the project ID and the URL of the image you want the vulnerability report for, in the form resourceURL := "https://gcr.io/my-project/my-repo/my-image". This means that if there are multiple images in your project, you have to list and store their URLs first, and only then call findVulnerabilityOccurrencesForImage() for each of them to get ALL of the vulnerabilities.
So I need a way to get and store the URLs of all images in all the repos of a given GCP project, but so far I haven't found a solution. I can do this easily in the CLI by running the gcloud container images list command, but I don't see how it can be done with an API.
Thank you in advance for your help!
You can use the Cloud Storage package and the Objects method to do so. For example:
func GetURLs() ([]string, error) {
    bucket := "bucket-name"
    urls := []string{}
    results := client.Bucket(bucket).Objects(context.Background(), nil)
    for {
        attrs, err := results.Next()
        if err != nil {
            if err == iterator.Done {
                break
            }
            return nil, fmt.Errorf("iterating results: %w", err)
        }
        urls = append(urls, fmt.Sprint("https://storage.googleapis.com", "/", bucket, "/", attrs.Name))
    }
    return urls, nil
}
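A minimal sketch of the wiring around that function, assuming client is a package-level *storage.Client (GetURLs itself additionally needs the google.golang.org/api/iterator import). The bucket name above is a placeholder; for gcr.io-hosted images the backing bucket is usually named artifacts.<project-id>.appspot.com:
package main

import (
    "context"
    "fmt"
    "log"

    "cloud.google.com/go/storage"
)

var client *storage.Client

func main() {
    ctx := context.Background()
    var err error
    // NewClient picks up Application Default Credentials.
    client, err = storage.NewClient(ctx)
    if err != nil {
        log.Fatalf("storage.NewClient: %v", err)
    }
    defer client.Close()

    urls, err := GetURLs()
    if err != nil {
        log.Fatalf("GetURLs: %v", err)
    }
    for _, u := range urls {
        fmt.Println(u) // feed each URL into findVulnerabilityOccurrencesForImage here
    }
}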

List custom resources from caching client with custom fieldSelector

I'm using the Operator SDK to build a custom Kubernetes operator. I have created a custom resource definition and a controller using the respective Operator SDK commands:
operator-sdk add api --api-version example.com/v1alpha1 --kind=Example
operator-sdk add controller --api-version example.com/v1alpha1 --kind=Example
Within the main reconciliation loop (for the example above, the auto-generated ReconcileExample.Reconcile method) I have some custom business logic that requires me to query the Kubernetes API for other objects of the same kind that have a certain field value. It's occurred to me that I might be able to use the default API client (that is provided by the controller) with a custom field selector:
func (r *ReconcileExample) Reconcile(request reconcile.Request) (reconcile.Result, error) {
    ctx := context.TODO()
    listOptions := client.ListOptions{
        FieldSelector: fields.SelectorFromSet(fields.Set{"spec.someField": "someValue"}),
        Namespace:     request.Namespace,
    }
    otherExamples := v1alpha1.ExampleList{}
    if err := r.client.List(ctx, &listOptions, &otherExamples); err != nil {
        return reconcile.Result{}, err
    }
    // do stuff...
    return reconcile.Result{}, nil
}
When I run the operator and create a new Example resource, the operator fails with the following error message:
{"level":"info","ts":1563388786.825384,"logger":"controller_example","msg":"Reconciling Example","Request.Namespace":"default","Request.Name":"example-test"}
{"level":"error","ts":1563388786.8255732,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"example-controller","request":"default/example-test","error":"Index with name field:spec.someField does not exist","stacktrace":"..."}
The most important part being
Index with name field:spec.someField does not exist
I've already searched the Operator SDK's documentation on the default API client and learned a bit about the inner workings of the client, but found no detailed explanation of this error or how to fix it.
What does this error message mean, and how can I create this missing index to efficiently list objects by this field value?
The default API client that is provided by the controller is a split client -- it serves Get and List requests from a locally-held cache and forwards other methods like Create and Update directly to the Kubernetes API server. This is also explained in the respective documentation:
The SDK will generate code to create a Manager, which holds a Cache and a Client to be used in CRUD operations and communicate with the API server. By default a Controller's Reconciler will be populated with the Manager's Client which is a split-client. [...] A split client reads (Get and List) from the Cache and writes (Create, Update, Delete) to the API server. Reading from the Cache significantly reduces request load on the API server; as long as the Cache is updated by the API server, read operations are eventually consistent.
To query values from the cache using a custom field selector, the cache needs to have a search index for this field. This indexer can be defined right after the cache has been set up.
To register a custom indexer, add the following code into the bootstrapping logic of the operator (in the auto-generated code, this is done directly in main). This needs to be done after the controller manager has been instantiated (manager.New) and also after the custom API types have been added to the runtime.Scheme:
package main

import (
    k8sruntime "k8s.io/apimachinery/pkg/runtime"
    "example.com/example-operator/pkg/apis/example/v1alpha1"
    // ...
)

func main() {
    // ...
    cache := mgr.GetCache()
    indexFunc := func(obj k8sruntime.Object) []string {
        return []string{obj.(*v1alpha1.Example).Spec.SomeField}
    }
    if err := cache.IndexField(&v1alpha1.Example{}, "spec.someField", indexFunc); err != nil {
        panic(err)
    }
    // ...
}
When a respective indexer function is defined, field selectors on spec.someField will work from the local cache as expected.

Google Cloud Bigtable authentication with Go

I'm trying to insert a simple record as shown in the GoDoc, but this returns:
rpc error: code = 7 desc = "User can't access project: tidy-groove"
When I searched the gRPC codes, it says:
// PermissionDenied indicates the caller does not have
// permission to execute the specified operation.
PermissionDenied Code = 7
I've enabled Bigtable in my console, created a cluster and a service account, and received the JSON key. What am I doing wrong here?
package main

import (
    "fmt"
    "io/ioutil"

    "golang.org/x/net/context"
    "golang.org/x/oauth2/google"
    "google.golang.org/cloud"
    "google.golang.org/cloud/bigtable"
)

func main() {
    fmt.Println("Start!")
    put()
}

func getClient() *bigtable.Client {
    jsonKey, err := ioutil.ReadFile("TestProject-7854ea9op741.json")
    if err != nil {
        fmt.Println(err.Error())
    }
    config, err := google.JWTConfigFromJSON(
        jsonKey,
        bigtable.Scope,
    ) // or bigtable.AdminScope, etc.
    if err != nil {
        fmt.Println(err.Error())
    }
    ctx := context.Background()
    client, err := bigtable.NewClient(ctx, "tidy-groove", "asia-east1-b", "test1-bigtable", cloud.WithTokenSource(config.TokenSource(ctx)))
    if err != nil {
        fmt.Println(err.Error())
    }
    return client
}

func put() {
    ctx := context.Background()
    client := getClient()
    tbl := client.Open("table1")
    mut := bigtable.NewMutation()
    mut.Set("links", "maps.google.com", bigtable.Now(), []byte("1"))
    mut.Set("links", "golang.org", bigtable.Now(), []byte("1"))
    err := tbl.Apply(ctx, "com.google.cloud", mut)
    if err != nil {
        fmt.Println(err.Error())
    }
}
I've solved the problem. There's nothing wrong with the code; the problem was the config JSON itself. So, for anyone out there who wants to authenticate and found this via a Google search: the code above is correct and works perfectly. What I did wrong is as follows.
First I created a service account and downloaded the JSON key. Google warned me that I wasn't an owner of the project and that the account therefore wouldn't be added to the accept list, but it let me download the JSON anyway.
Then I deleted that key from the console and asked the project owner to create a key for me.
He created another key with the same name I had given, and since he is the owner, no error or warning messages were displayed and the JSON file downloaded successfully.
When I tried with that key, my problem began; that's when I posted this question.
After that, with no solutions in sight, I asked the owner to delete that key and create another one, but with a different name.
Then it worked! It seems that creating a key from a non-owner account and then recreating one with the same name (after deleting the original, of course) has no effect. Hope this helps everyone out there :)
Take a look at helloworld.go or search.go, which use the GOOGLE_APPLICATION_CREDENTIALS environment variable.
For most environments, you no longer even need to set GOOGLE_APPLICATION_CREDENTIALS. Google Cloud Platform, Managed VMs, and Google App Engine all have the right thing set for you. Your desktop environment will also be correct if you've used gcloud init or its predecessor gcloud auth login followed by gcloud config set project <projectID>.
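With Application Default Credentials in place, the getClient function from the question can shrink to something like the sketch below. It keeps the legacy google.golang.org/cloud packages used in the question (newer code would import cloud.google.com/go/bigtable instead) and relies on google.DefaultTokenSource to find credentials via GOOGLE_APPLICATION_CREDENTIALS, gcloud, or the metadata server:
func getClient() (*bigtable.Client, error) {
    ctx := context.Background()
    // DefaultTokenSource walks the Application Default Credentials chain.
    ts, err := google.DefaultTokenSource(ctx, bigtable.Scope)
    if err != nil {
        return nil, err
    }
    return bigtable.NewClient(ctx, "tidy-groove", "asia-east1-b", "test1-bigtable",
        cloud.WithTokenSource(ts))
}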

Datastore: Create parent and child entity in an entity group transaction?

After reading about Google Datastore concepts/theory, I started using the Go datastore package.
Scenario:
Kinds User and LinkedAccount require that every user has one or more linked accounts (yay, 3rd-party login). For strong consistency, LinkedAccounts will be children of the associated User. Creating a new user then involves creating both a User and a LinkedAccount, never just one.
User creation seems like the perfect use case for transactions. If, say, LinkedAccount creation fails, the transaction rolls back and fails. This doesn't currently seem possible. The goal is to create a parent and then a child within a transaction.
According to docs
All Datastore operations in a transaction must operate on entities in
the same entity group if the transaction is a single group transaction
We want a new User and LinkedAccount to be in the same group, so to me it sounds like Datastore should support this scenario. My fear is that the intended meaning is that only operations on existing entities in the same group can be performed in a single transaction.
tx, err := datastore.NewTransaction(ctx)
if err != nil {
    return err
}
incompleteUserKey := datastore.NewIncompleteKey(ctx, "User", nil)
pendingKey, err := tx.Put(incompleteUserKey, user)
if err != nil {
    return err
}
incompleteLinkedAccountKey := datastore.NewIncompleteKey(ctx, "GithubAccount", incompleteUserKey)
// also tried PendingKey as parent, but it's a separate struct type
_, err = tx.Put(incompleteLinkedAccountKey, linkedAccount)
if err != nil {
    return err
}
// attempt to commit
if _, err := tx.Commit(); err != nil {
    return err
}
return nil
From the library source it's clear why this doesn't work: PendingKeys aren't Keys, and incomplete keys can't be used as parents.
Is this a necessary limitation of Datastore or of the library? For those experienced with this type of requirement, did you just sacrifice strong consistency and make both kinds global?
For Google-ability:
datastore: invalid key
datastore: cannot use pendingKey as type *"google.golang.org/cloud/datastore".Key
One thing to note is that transactions in the Cloud Datastore API can operate on up to 25 entity groups, but this doesn't answer the question of how to create two entities in the same entity group as part of a single transaction.
There are a few ways to approach this (note that this applies to any use of the Cloud Datastore API, not just the gcloud-golang library):
Use a (string) name for the parent key instead of having Datastore automatically assign a numeric ID:
parentKey := datastore.NewKey(ctx, "Parent", "parent-name", 0, nil)
childKey := datastore.NewIncompleteKey(ctx, "Child", parentKey)
Make an explicit call to AllocateIds to have the Datastore pick a numeric ID for the parent key:
incompleteKeys := []*datastore.Key{datastore.NewIncompleteKey(ctx, "Parent", nil)}
completeKeys, err := datastore.AllocateIDs(ctx, incompleteKeys)
if err != nil {
    // ...
}
parentKey := completeKeys[0]
childKey := datastore.NewIncompleteKey(ctx, "Child", parentKey)
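Either way, once the parent key is complete it can safely be used as a parent inside the transaction. A minimal sketch, reusing ctx, user, and linkedAccount from the question:
tx, err := datastore.NewTransaction(ctx)
if err != nil {
    return err
}
// parentKey is complete (named explicitly or allocated above), so the child
// key can reference it even before the transaction commits.
if _, err := tx.Put(parentKey, user); err != nil {
    return err
}
childKey := datastore.NewIncompleteKey(ctx, "GithubAccount", parentKey)
if _, err := tx.Put(childKey, linkedAccount); err != nil {
    return err
}
if _, err := tx.Commit(); err != nil {
    return err
}
return nil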
