How to mock memcache in Go for unit testing?

I want to mock the memcache data in Go so that my handler's authorization check can be tested without a real cache.
I tried gomock, but I couldn't make it work because I don't have an interface for the cache. The lookup function is:
func getAccessTokenFromCache(accessToken string)
func TestSendData(t *testing.T) {
	mockCtrl := gomock.NewController(t)
	defer mockCtrl.Finish()

	mockObj := mock_utils.NewMockCacheInterface(mockCtrl)
	mockObj.EXPECT().GetAccessToken("abcd")

	var jsonStr = []byte(`{
		"devices": [
			{"id": "avccc",
			 "data": "abcd/"
			}
		]
	}`)

	req, err := http.NewRequest("POST", "/send/v1/data", bytes.NewBuffer(jsonStr))
	if err != nil {
		t.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "d958372f5039e28")

	rr := httptest.NewRecorder()
	handler := http.HandlerFunc(SendData)
	handler.ServeHTTP(rr, req)

	if status := rr.Code; status != http.StatusOK {
		t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
	}

	expected := `{"error":"Invalid access token"}`
	body, _ := ioutil.ReadAll(rr.Body)
	if string(body) != expected {
		t.Errorf("handler returned unexpected body: got %v want %v", string(body), expected)
	}
}
func SendData(w http.ResponseWriter, r *http.Request) {
	accessToken := r.Header.Get(constants.AUTHORIZATION_HEADER_KEY)
	t := utils.CacheType{At1: accessToken}
	a := utils.CacheInterface(t)
	isAccessTokenValid := utils.CacheInterface.GetAccessToken(a, accessToken)
	if !isAccessTokenValid {
		RespondError(w, http.StatusUnauthorized, "Invalid access token")
		return
	}
	response := make(map[string]string, 1)
	response["message"] = "success"
	RespondJSON(w, http.StatusOK, response)
}
I tried to mock this with gomock: I generated a package mock_utils with mockgen for the access-token getter, following the documented steps:
(1) Define an interface that you wish to mock.
(2) Use mockgen to generate a mock from the interface.
(3) Use the mock in a test.

You need to architect your code such that every such access to a service happens via an interface implementation. In your case, you should ideally create an interface like
type CacheInterface interface {
	Set(key string, val interface{}) error
	Get(key string) (interface{}, error)
}
Your MemcacheStruct should implement this interface, and all your memcache-related calls should go through it. In your case, GetAccessToken should call cacheInterface.Get(key), where cacheInterface refers to the memcache implementation of this interface. This is a much better way to structure Go programs: it not only makes tests easier to write, it also helps if you later want to back the cache with a different in-memory store. For example, if you want to use Redis as your cache storage in the future, all you need to do is create a new implementation of this interface.
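As a rough sketch of that design (the MemcacheCache and Server names, and the use of github.com/bradfitz/gomemcache, are assumptions for illustration, not code from the question), the production wiring could look like this:

package utils

import (
	"encoding/json"
	"net/http"

	"github.com/bradfitz/gomemcache/memcache"
)

// CacheInterface is the seam every cache access goes through.
type CacheInterface interface {
	Set(key string, val interface{}) error
	Get(key string) (interface{}, error)
}

// MemcacheCache is the real implementation used in production.
type MemcacheCache struct {
	Client *memcache.Client
}

func (m *MemcacheCache) Set(key string, val interface{}) error {
	b, err := json.Marshal(val)
	if err != nil {
		return err
	}
	return m.Client.Set(&memcache.Item{Key: key, Value: b})
}

func (m *MemcacheCache) Get(key string) (interface{}, error) {
	item, err := m.Client.Get(key)
	if err != nil {
		return nil, err
	}
	return item.Value, nil
}

// Server depends only on the interface, so a test can inject a
// mockgen-generated MockCacheInterface (or a tiny hand-written fake)
// instead of talking to a real memcache.
type Server struct {
	Cache CacheInterface
}

func (s *Server) SendData(w http.ResponseWriter, r *http.Request) {
	token := r.Header.Get("Authorization")
	if _, err := s.Cache.Get(token); err != nil {
		http.Error(w, `{"error":"Invalid access token"}`, http.StatusUnauthorized)
		return
	}
	w.Write([]byte(`{"message":"success"}`))
}

In a test you would then build Server{Cache: mockObj} and drive SendData with httptest exactly as in the question; no memcache server or real token is needed.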

Related

Logging metrics for go grpc server

This is kind of an extension of my previous question, Reuse log client in interceptor for Golang grpc server method.
Basically I have a grpc server (written in Go) that exposes three APIs:
SubmitJob
CancelJob
GetJobStatus
I am using Datadog to log metrics, so in each API, I have code like:
func (s *myServer) submitJob(ctx context.Context, request *submitJobRequest) (*submitJobResponse, error) {
	s.dd_client.LogRequestCount("SubmitJob")
	start_time := time.Now()
	// Wrapped in a closure so the duration is measured when the handler
	// returns, not when the defer statement is evaluated.
	defer func() { s.dd_client.LogRequestDuration("SubmitJob", time.Since(start_time)) }()

	sth, err := someFunc1()
	if err != nil {
		s.dd_client.LogErrorCount("SubmitJob")
		return nil, err
	}
	resp, err := someFunc2(sth)
	if err != nil {
		s.dd_client.LogErrorCount("SubmitJob")
		return nil, err
	}
	return resp, nil
}
This approach works but has several problems:
The LogRequestCount and LogRequestDuration calls are duplicated across all APIs
I am calling LogErrorCount in every place where an error is returned, which seems ugly
I learned that interceptor might help with logging, so I wrote an interceptor like
func (s *myServer) UnaryInterceptor(ctx context.Context,
	request interface{},
	info *grpc.UnaryServerInfo,
	handler grpc.UnaryHandler,
) (interface{}, error) {
	// Get method name e.g. SubmitJob, CancelJob, GetJobStatus
	tmp := strings.Split(info.FullMethod, "/")
	method := tmp[len(tmp)-1]

	s.dd_client.LogRequestCount(method)
	start_time := time.Now()

	resp, err := handler(ctx, request)

	s.dd_client.LogRequestDuration(method, time.Since(start_time))
	if err != nil {
		s.dd_client.LogErrorCount(method)
	}
	return resp, err
}
And set it in main() function:
server := grpc.NewServer(grpc.UnaryInterceptor(my_server.UnaryInterceptor))
This works for me, but I noticed two problems:
Here the interceptor takes myServer as its receiver; is that good practice? I did it because I want to reuse the Datadog client (dd_client) created inside myServer. The alternatives would be to make the Datadog client a singleton used by both the interceptor and myServer, or to create a separate interceptor struct with its own Datadog client (sketched below).
The interceptor can only handle logging of generic metrics, e.g. request count and duration. There may also be metrics specific to each API, which means I still need logging-related code in each API implementation. So should I still use an interceptor? The logging-related code would then be split across two places (the API implementations and the interceptor).
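For what it's worth, here is a minimal sketch of the "separate interceptor struct" option mentioned above. The metricsClient interface is an assumption standing in for the Datadog client from the question; none of this is from the original post.

package main

import (
	"context"
	"strings"
	"time"

	"google.golang.org/grpc"
)

// metricsClient is the small surface the interceptor needs; the real
// Datadog client from the question is assumed to satisfy it.
type metricsClient interface {
	LogRequestCount(method string)
	LogRequestDuration(method string, d time.Duration)
	LogErrorCount(method string)
}

// metricsInterceptor owns its own metrics client, so neither it nor
// myServer depends on the other.
type metricsInterceptor struct {
	dd metricsClient
}

func (m *metricsInterceptor) Unary(ctx context.Context,
	req interface{},
	info *grpc.UnaryServerInfo,
	handler grpc.UnaryHandler,
) (interface{}, error) {
	parts := strings.Split(info.FullMethod, "/")
	method := parts[len(parts)-1]

	m.dd.LogRequestCount(method)
	start := time.Now()

	resp, err := handler(ctx, req)

	m.dd.LogRequestDuration(method, time.Since(start))
	if err != nil {
		m.dd.LogErrorCount(method)
	}
	return resp, err
}

It would be registered in main() with something like mi := &metricsInterceptor{dd: ddClient}; grpc.NewServer(grpc.UnaryInterceptor(mi.Unary)). A common compromise for the second concern is to keep the generic count/duration/error metrics in the interceptor and leave API-specific business metrics inside the individual handlers.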

Mock/test basic http.get request

I am learning to write unit tests, and I was wondering about the correct way to unit test a basic http.Get request.
I found an API online that returns fake data and wrote a basic program that gets some user data and prints out an ID:
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

type UserData struct {
	Meta interface{} `json:"meta"`
	Data struct {
		ID     int    `json:"id"`
		Name   string `json:"name"`
		Email  string `json:"email"`
		Gender string `json:"gender"`
		Status string `json:"status"`
	} `json:"data"`
}

func main() {
	resp := sendRequest()
	body := readBody(resp)
	id := unmarshallData(body)
	fmt.Println(id)
}

func sendRequest() *http.Response {
	resp, err := http.Get("https://gorest.co.in/public/v1/users/1841")
	if err != nil {
		log.Fatalln(err)
	}
	return resp
}

func readBody(resp *http.Response) []byte {
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatalln(err)
	}
	return body
}

func unmarshallData(body []byte) int {
	var userData UserData
	json.Unmarshal(body, &userData)
	return userData.Data.ID
}
This works and prints out 1841. I then wanted to write some tests that validate that the code behaves as expected, e.g. that it correctly fails if an error is returned, and that the data returned can be unmarshalled. I have been reading online and looking at examples, but they are all far more complex than what I feel I am trying to achieve.
I have started with the following test that ensures that the data passed to the unmarshallData function can be unmarshalled:
package main

import (
	"testing"
)

func Test_unmarshallData(t *testing.T) {
	type args struct {
		body []byte
	}
	tests := []struct {
		name string
		args args
		want int
	}{
		{
			name: "Unmarshall",
			args: args{body: []byte(`{"meta":null,"data":{"id":1841,"name":"Piya","email":"priya@gmai.com","gender":"female","status":"active"}}`)},
			want: 1841,
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := unmarshallData(tt.args.body); got != tt.want {
				t.Errorf("unmarshallData() = %v, want %v", got, tt.want)
			}
		})
	}
}
Any advise on where to go from here would be appreciated.
Before moving on to the testing, note that your code has a serious flaw, which will become a problem if you do not take care of it in your future programming tasks.
https://pkg.go.dev/net/http See the second example:
The client must close the response body when finished with it.
Let's fix that now (we will have to come back to this subject later). There are two possibilities.
1/ Within main, use defer to close the body after you have drained it:
func main() {
	resp := sendRequest()
	defer resp.Body.Close()
	body := readBody(resp)
	id := unmarshallData(body)
	fmt.Println(id)
}
2/ Do that within readBody
func readBody(resp *http.Response) []byte {
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatalln(err)
	}
	return body
}
Using defer is the expected way to close the resource: it helps the reader identify the lifetime of the resource and improves readability.
Note: I will not use the table-driven test pattern much below, but you should, as you did in your OP.
Moving on to the testing part.
Tests can be written in the package under test, or in a companion package named with a trailing _test suffix, such as [package under test]_test. This has implications in two ways.
Using a separate package, the tests are ignored in the final build, which helps produce smaller binaries.
Using a separate package, you test the API in a black-box manner: you can only access the identifiers it explicitly exposes.
Your current tests are white-box: you can access any declaration of package main, exported or not.
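As a tiny illustration (the file and module paths here are hypothetical, not from the answer), a black-box test file sits in the same directory but declares the _test package and imports the target explicitly:

// file: fetcher/fetcher_blackbox_test.go
package fetcher_test // black box: only exported identifiers of fetcher are visible

import (
	"testing"

	"example.com/demo/fetcher" // hypothetical module path
)

func TestFetchID(t *testing.T) {
	if got := fetcher.FetchID(); got == 0 { // FetchID is a hypothetical exported function
		t.Fatal("expected a non-zero id")
	}
}

A white-box test file would instead declare package fetcher and could call unexported helpers directly.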
About sendRequest, writing a test around this is not very interesting because it does too little, and your tests should not be written to test the std library.
But for the sake of the demonstration, and because for good reasons we might not want to rely on external resources to run our tests, let's do it anyway.
To achieve that, we must turn the global dependency it consumes, the http.Get call, into an injected dependency, so that later on it is possible to replace the one thing it depends on.
func sendRequest(client interface{ Get(url string) (*http.Response, error) }) *http.Response {
	resp, err := client.Get("https://gorest.co.in/public/v1/users/1841")
	if err != nil {
		log.Fatalln(err)
	}
	return resp
}
Here I use an inline interface declaration, interface{ Get(url string) (*http.Response, error) }.
Now we can add a new test which injects a piece of code that will return exactly the values that will trigger the behavior we want to test within our code.
type fakeGetter struct {
	resp *http.Response
	err  error
}

func (f fakeGetter) Get(u string) (*http.Response, error) {
	return f.resp, f.err
}

func TestSendRequestReturnsNilResponseOnError(t *testing.T) {
	c := fakeGetter{
		err: fmt.Errorf("whatever error will do"),
	}
	resp := sendRequest(c)
	if resp != nil {
		t.Fatal("it should return a nil response when an error arises")
	}
}
Now run this test and see the result. It is not conclusive, because your function contains a call to log.Fatal, which in turn calls os.Exit; we cannot test that.
If we change that, we might be tempted to call panic instead, because a panic can be recovered.
I don't recommend doing that; in my opinion this is smelly and bad, but it exists, so we might consider it. It is also the smallest possible change to the function signature: returning an error would break the current signatures even more, and I want to minimize that for this demonstration. But, as a rule of thumb, return an error and always check it.
In the sendRequest function, replace this call log.Fatalln(err) with panic(err) and update the test to capture the panic.
func TestSendRequestReturnsNilResponseOnError(t *testing.T) {
	var hasPanicked bool
	defer func() {
		_ = recover() // the value returned by recover is the one passed to panic; we have no use for it here
		hasPanicked = true
	}()
	c := fakeGetter{
		err: fmt.Errorf("whatever error will do"),
	}
	resp := sendRequest(c)
	if resp != nil {
		t.Fatal("it should return a nil response when an error arises")
	}
	if !hasPanicked {
		t.Fatal("it should have panicked")
	}
}
We can now move on to the other execution path, the non error return.
For that we forge the desired *http.Response instance we want to pass into our function, we will then check its properties to figure out if what the function does is inline with what we expect.
We will simply check that it is returned unmodified.
The test below only sets two properties; I do it this way to demonstrate how to set the Body with ioutil.NopCloser and strings.NewReader, which is often needed later on when writing Go.
I also use reflect.DeepEqual as a brute-force equality checker; usually you can be more fine-grained and get better tests. DeepEqual does the job here, but it introduces complexity that does not justify using it systematically.
func TestSendRequestReturnsUnmodifiedResponse(t *testing.T) {
	c := fakeGetter{
		err: nil,
		resp: &http.Response{
			StatusCode: http.StatusOK,
			Body:       ioutil.NopCloser(strings.NewReader("some text")),
		},
	}
	resp := sendRequest(c)
	if !reflect.DeepEqual(resp, c.resp) {
		t.Fatal("the response should not have been modified")
	}
}
At this point you may have figured out that this small sendRequest function is not good; if you have not, I assure you it is not. It does too little: it merely wraps the http.Get method, and testing it is of little interest for the survival of the business logic.
Moving on to readBody function.
All remarks that applied for sendRequest apply here too.
it does too little
it os.Exits
One thing does not apply: as the call to ioutil.ReadAll does not rely on external resources, there is no point in attempting to inject that dependency. We can test around it.
Though, for the sake of the demonstration, now is the time to talk about the missing call to defer resp.Body.Close().
Let us assume we go for the second proposition made in the introduction and test for that.
The http.Response struct conveniently exposes its Body as an interface (an io.ReadCloser).
To ensure the code calls Close, we can write a stub for it.
That stub will record whether the call was made, and the test can then check for that and report an error if it was not.
type closeCallRecorder struct {
	hasClosed bool
}

func (c *closeCallRecorder) Close() error {
	c.hasClosed = true
	return nil
}

func (c *closeCallRecorder) Read(p []byte) (int, error) {
	return 0, io.EOF // return EOF so ioutil.ReadAll terminates immediately
}

func TestReadBodyCallsClose(t *testing.T) {
	body := &closeCallRecorder{}
	res := &http.Response{
		Body: body,
	}
	_ = readBody(res)
	if !body.hasClosed {
		t.Fatal("the response body was not closed")
	}
}
Similarly, and for the sake of the demonstration, we might want to test that the function has called Read.
type readCallRecorder struct {
	hasRead bool
}

func (c *readCallRecorder) Read(p []byte) (int, error) {
	c.hasRead = true
	return 0, io.EOF // return EOF so ioutil.ReadAll terminates immediately
}

func TestReadBodyHasReadAnything(t *testing.T) {
	body := &readCallRecorder{}
	res := &http.Response{
		Body: ioutil.NopCloser(body),
	}
	_ = readBody(res)
	if !body.hasRead {
		t.Fatal("the response body was not read")
	}
}
We can also verify the body was not modified in between:
func TestReadBodyDidNotModifyTheResponse(t *testing.T) {
	want := "this"
	res := &http.Response{
		Body: ioutil.NopCloser(strings.NewReader(want)),
	}
	resp := readBody(res)
	if got := string(resp); want != got {
		t.Fatalf("invalid response, wanted=%q got %q", want, got)
	}
}
We are almost done; let's move on to the unmarshallData function.
You have already written a test for it. It is OK-ish, though I would write it this way to make it leaner:
type UserData struct {
	Meta interface{} `json:"meta"`
	Data Data        `json:"data"`
}

type Data struct {
	ID     int    `json:"id"`
	Name   string `json:"name"`
	Email  string `json:"email"`
	Gender string `json:"gender"`
	Status string `json:"status"`
}
func Test_unmarshallData(t *testing.T) {
	tests := []UserData{
		{Data: Data{ID: 1841}},
	}
	for _, u := range tests {
		want := u.Data.ID
		b, _ := json.Marshal(u)
		t.Run("Unmarshal", func(t *testing.T) {
			if got := unmarshallData(b); got != want {
				t.Errorf("unmarshallData() = %v, want %v", got, want)
			}
		})
	}
}
Then the usual remarks apply:
don't log.Fatal
what are you testing? The marshaller?
Finally, now that we have gathered all those pieces, we can refactor to write a more sensible function, and reuse all those pieces to help us test such code.
I won't do it fully, but here is a starter. It still panics, and I still don't recommend that, but the previous demonstration has shown everything needed to test a version of it that returns an error instead.
type userFetcher struct {
	Requester interface {
		Get(u string) (*http.Response, error)
	}
}

func (u userFetcher) Fetch() int {
	// It does not really matter that this URL is static; using the Requester
	// we can mock the response, its body and the error.
	resp, err := u.Requester.Get("https://gorest.co.in/public/v1/users/1841")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close() // always.
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	var userData UserData
	err = json.Unmarshal(body, &userData)
	if err != nil {
		panic(err)
	}
	return userData.Data.ID
}
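As a closing sketch (the test name and the JSON fixture are mine, not from the answer), the refactored userFetcher can now be exercised entirely offline by reusing the fakeGetter defined earlier:

func TestUserFetcherReturnsID(t *testing.T) {
	c := fakeGetter{
		resp: &http.Response{
			StatusCode: http.StatusOK,
			Body: ioutil.NopCloser(strings.NewReader(
				`{"meta":null,"data":{"id":1841,"name":"Piya"}}`)),
		},
	}
	uf := userFetcher{Requester: c}
	if got := uf.Fetch(); got != 1841 {
		t.Fatalf("Fetch() = %d, want 1841", got)
	}
}

The same fake can return a malformed body or a non-nil error to drive the other execution paths.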

How to serialize `LastEvaluatedKey` from DynamoDB's Golang SDK?

When working with DynamoDB in Golang, if a call to query has more results, it will set LastEvaluatedKey on the QueryOutput, which you can then pass in to your next call to query as ExclusiveStartKey to pick up where you left off.
This works great when the values stay in Golang. However, I am writing a paginated API endpoint, so I would like to serialize this key so I can hand it back to the client as a pagination token. Something like this, where something is the magic package that does what I want:
type GetDomainObjectsResponse struct {
	Items     []MyDomainObject `json:"items"`
	NextToken string           `json:"next_token"`
}

func GetDomainObjects(w http.ResponseWriter, req *http.Request) {
	// ... parse query params, set up dynamoIn ...
	dynamoIn.ExclusiveStartKey = something.Decode(params.NextToken)
	dynamoOut, _ := db.Query(dynamoIn)
	response := GetDomainObjectsResponse{}
	dynamodbattribute.UnmarshalListOfMaps(dynamoOut.Items, &response.Items)
	response.NextToken = something.Encode(dynamoOut.LastEvaluatedKey)
	// ... marshal and write the response ...
}
(please forgive any typos in the above, it's a toy version of the code I whipped up quickly to isolate the issue)
Because I'll need to support several endpoints with different search patterns, I would love a way to generate pagination tokens that doesn't depend on the specific search key.
The trouble is, I haven't found a clean and generic way to serialize the LastEvaluatedKey. You can marshal it directly to JSON (and then e.g. base64 encode it to get a token), but doing so is not reversible. LastEvaluatedKey is a map[string]types.AttributeValue, and types.AttributeValue is an interface, so while the json encoder can read it, it can't write it.
For example, the following code panics with panic: json: cannot unmarshal object into Go value of type types.AttributeValue.
lastEvaluatedKey := map[string]types.AttributeValue{
	"year":  &types.AttributeValueMemberN{Value: "1993"},
	"title": &types.AttributeValueMemberS{Value: "Benny & Joon"},
}

bytes, err := json.Marshal(lastEvaluatedKey)
if err != nil {
	panic(err)
}

decoded := map[string]types.AttributeValue{}
err = json.Unmarshal(bytes, &decoded)
if err != nil {
	panic(err)
}
What I would love would be a way to use the DynamoDB-flavored JSON directly, like what you get when you run aws dynamodb query on the CLI. Unfortunately the golang SDK doesn't support this.
I suppose I could write my own serializer / deserializer for the AttributeValue types, but that's more effort than this project deserves.
Has anyone found a generic way to do this?
OK, I figured something out.
type GetDomainObjectsResponse struct {
	Items     []MyDomainObject `json:"items"`
	NextToken string           `json:"next_token"`
}

func GetDomainObjects(w http.ResponseWriter, req *http.Request) {
	// ... parse query params, set up dynamoIn ...
	eskMap := map[string]string{}
	json.Unmarshal([]byte(params.NextToken), &eskMap)
	esk, _ := dynamodbattribute.MarshalMap(eskMap)
	dynamoIn.ExclusiveStartKey = esk
	dynamoOut, _ := db.Query(dynamoIn)
	response := GetDomainObjectsResponse{}
	dynamodbattribute.UnmarshalListOfMaps(dynamoOut.Items, &response.Items)
	lek := map[string]string{}
	dynamodbattribute.UnmarshalMap(dynamoOut.LastEvaluatedKey, &lek)
	nextToken, _ := json.Marshal(lek)
	response.NextToken = string(nextToken)
	// ... marshal and write the response ...
}
(again this is my real solution hastily transferred back to the toy problem, so please forgive any typos)
As @buraksurdar pointed out, attributevalue.Unmarshal takes an interface{}. It turns out that in addition to a concrete type, you can pass in a map[string]string, and it just works.
I believe this will NOT work if the AttributeValue is not flat, so this isn't a general solution [citation needed]. But my understanding is the LastEvaluatedKey returned from a call to Query will always be flat, so it works for this usecase.
Inspired by Dan, here is a solution to serialize and deserialize to/from base64
package dynamodb_helpers

import (
	"encoding/base64"
	"encoding/json"

	"github.com/aws/aws-sdk-go-v2/feature/dynamodb/attributevalue"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

func Serialize(input map[string]types.AttributeValue) (*string, error) {
	var inputMap map[string]interface{}
	err := attributevalue.UnmarshalMap(input, &inputMap)
	if err != nil {
		return nil, err
	}
	bytesJSON, err := json.Marshal(inputMap)
	if err != nil {
		return nil, err
	}
	output := base64.StdEncoding.EncodeToString(bytesJSON)
	return &output, nil
}

func Deserialize(input string) (map[string]types.AttributeValue, error) {
	bytesJSON, err := base64.StdEncoding.DecodeString(input)
	if err != nil {
		return nil, err
	}
	outputJSON := map[string]interface{}{}
	err = json.Unmarshal(bytesJSON, &outputJSON)
	if err != nil {
		return nil, err
	}
	return attributevalue.MarshalMap(outputJSON)
}
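For completeness, a hedged sketch of how these helpers might be wired into the paginated handler from the original question, assuming the aws-sdk-go-v2 DynamoDB client (whose Query method takes a context) and eliding most error handling:

// Decode the incoming pagination token, if any.
if params.NextToken != "" {
	esk, err := dynamodb_helpers.Deserialize(params.NextToken)
	if err != nil {
		http.Error(w, "invalid next_token", http.StatusBadRequest)
		return
	}
	dynamoIn.ExclusiveStartKey = esk
}

dynamoOut, _ := db.Query(req.Context(), dynamoIn)

// Emit a token only when DynamoDB reports that more pages remain.
if len(dynamoOut.LastEvaluatedKey) > 0 {
	token, _ := dynamodb_helpers.Serialize(dynamoOut.LastEvaluatedKey)
	response.NextToken = *token
}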

Gokit: Validate request/payload in transport layer

I am using go-kit to create an RPC endpoint. I am creating an endpoint like this
httptransport.NewServer(
	endPoint.MakeGetBlogEndPoint(blogService),
	transport.DecodeGetBlogRequest,
	transport.EncodeGetBlogResponse,
)
Below is my DecodeGetBlogRequest function
func DecodeGetBlogRequest(c context.Context, r *http.Request) (interface{}, error) {
	vars := mux.Vars(r)
	id, err := strconv.Atoi(vars["id"])
	if err != nil {
		return nil, err
	}
	req := endPoint.GetBlogRequest{
		ID: id,
	}
	return req, nil
}
What I want to do is validate the HTTP request in this function and if found invalid, send a response with a valid error code from here only, without passing it to the service layer. i.e. If ID is not a valid number, return 400 Bad Request response from here.
But as I don't have a ResponseWriter reference in this function, I am not sure how to do it.
I am following this example from go-kit docs
https://gokit.io/examples/stringsvc.html
Is it a valid assumption that request/payload should be validated in the transport layer only and the service layer should only be called if the request/payload is valid? If yes, how to do so in this example?
You could use ServerErrorEncoder, which is one of the httptransport server options (it can be found in github.com/go-kit/kit/transport/http/server.go).
Basically, in your transport layer, apart from the Decode and Encode functions, you can define a YourErrorEncoderFunc() function with the following signature; it will catch any error returned in the transport layer.
YourErrorEncoderFunc(_ context.Context, err error, w http.ResponseWriter).
You will need to attach this function as an option in your endpoint registration like:
ABCOpts := []httptransport.ServerOption{
	httptransport.ServerErrorEncoder(YourErrorEncoderFunc),
}

r.Methods("GET").Path("/api/v1/abc/def").Handler(httptransport.NewServer(
	endpoints.GetDataEndpoint,
	DecodeGetRequest,
	EncodeGetResponse,
	ABCOpts...,
))
This will stop the request at the transport layer if validation fails, and write an error to the HTTP response in whatever format you've implemented in YourErrorEncoderFunc().
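A minimal sketch of what such an encoder could look like (the mapping from the strconv error to a 400 status is my own choice, not something go-kit prescribes):

import (
	"context"
	"encoding/json"
	"net/http"
	"strconv"
)

func YourErrorEncoderFunc(_ context.Context, err error, w http.ResponseWriter) {
	w.Header().Set("Content-Type", "application/json; charset=utf-8")
	code := http.StatusInternalServerError
	if _, ok := err.(*strconv.NumError); ok { // e.g. the Atoi failure from DecodeGetBlogRequest
		code = http.StatusBadRequest
	}
	w.WriteHeader(code)
	json.NewEncoder(w).Encode(map[string]string{"error": err.Error()})
}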
Not 100% sure if this applies to go-kit grpc as well:
You have an error return variable. Use that to indicate there was a problem. In the go grpc module there is a status package to return errors with status codes. If you return an error with a status code, the grpc layer will take the code from the error and send it back.
For example:
func DecodeGetBlogRequest(c context.Context, r *http.Request) (interface{}, error) {
	vars := mux.Vars(r)
	id, err := strconv.Atoi(vars["id"])
	if err != nil {
		return nil, status.Error(codes.InvalidArgument, err.Error())
	}
	req := endPoint.GetBlogRequest{
		ID: id,
	}
	return req, nil
}
Note also that grpc uses different status codes. In Go they are located in the codes package.

Is it a good practice to use use the same httptest server for multiple methods

I am trying to test some Go code. I have a method that calls several other methods from its body, and all of those methods perform operations using an Elasticsearch client. Would it be good practice to test this method against a single test server that writes different responses depending on the request method and path it receives, i.e. the requests the Elasticsearch client sends to my test server when the inner methods execute?
Update:
I am testing an elasticsearch middleware. It implements a reindex service like this
type reindexService interface {
	reindex(ctx context.Context, index string, mappings, settings map[string]interface{}, includes, excludes, types []string) error
	mappingsOf(ctx context.Context, index string) (map[string]interface{}, error)
	settingsOf(ctx context.Context, index string) (map[string]interface{}, error)
	aliasesOf(ctx context.Context, index string) ([]string, error)
	createIndex(ctx context.Context, name string, body map[string]interface{}) error
	deleteIndex(ctx context.Context, name string) error
	setAlias(ctx context.Context, index string, aliases ...string) error
	getIndicesByAlias(ctx context.Context, alias string) ([]string, error)
}
I can easily test all of these methods using this pattern: create a simple Elasticsearch client pointed at an httptest server URL and make requests to that server.
var createIndexTests = []struct {
	setup *ServerSetup
	index string
	err   string
}{
	{
		&ServerSetup{
			Method:   "PUT",
			Path:     "/test",
			Body:     `null`,
			Response: `{"acknowledged": true, "shards_acknowledged": true, "index": "test"}`,
		},
		"test",
		"",
	},
	// More test cases here
}

func TestCreateIndex(t *testing.T) {
	for _, tt := range createIndexTests {
		t.Run("Should successfully create index with a valid setup", func(t *testing.T) {
			ctx := context.Background()
			ts := buildTestServer(t, tt.setup)
			defer ts.Close()
			es, _ := newTestClient(ts.URL)
			err := es.createIndex(ctx, tt.index, nil)
			if !compareErrs(tt.err, err) {
				t.Fatalf("Index creation should have failed with error: %v got: %v instead\n", tt.err, err)
			}
		})
	}
}
But in the case of the reindex method this approach poses a problem, since reindex calls all the other methods inside its body. reindex looks something like this:
func (es *elasticsearch) reindex(ctx context.Context, indexName string, mappings, settings map[string]interface{}, includes, excludes, types []string) error {
	var err error

	// Some preflight checks

	// If mappings are not passed, we fetch the mappings of the old index.
	if mappings == nil {
		mappings, err = es.mappingsOf(ctx, indexName)
		// handle err
	}

	// If settings are not passed, we fetch the settings of the old index.
	if settings == nil {
		settings, err = es.settingsOf(ctx, indexName)
		// handle err
	}

	// Setup the destination index prior to running the _reindex action.
	body := make(map[string]interface{})
	body["mappings"] = mappings
	body["settings"] = settings

	newIndexName, err := reindexedName(indexName)
	// handle err

	err = es.createIndex(ctx, newIndexName, body)
	// handle err

	// Some additional operations

	// Reindex action.
	_, err = es.client.Reindex().
		Body(reindexBody).
		Do(ctx)
	// handle err

	// Fetch all the aliases of old index
	aliases, err := es.aliasesOf(ctx, indexName)
	// handle err
	aliases = append(aliases, indexName)

	// Delete old index
	err = es.deleteIndex(ctx, indexName)
	// handle err

	// Set aliases of old index to the new index.
	err = es.setAlias(ctx, newIndexName, aliases...)
	// handle err

	return nil
}
For testing the reindex method I have tried mocking and dependency injection, but that turns out to be hard, since the methods are defined on a struct rather than taking an interface as an argument. (I also want to keep the implementation as it is, because changing it would require changes to all the plugin implementations, and I want to avoid that.)
I wanted to know whether I can use a modified version of my build-server function (the one I am using is given below) for the reindex tests, so that it returns the appropriate responses based on the HTTP method and request path used by each inner method.
type ServerSetup struct {
	Method, Path, Body, Response string
	HTTPStatus                   int
}

// This function is a modified version of: https://github.com/github/vulcanizer/blob/master/es_test.go
func buildTestServer(t *testing.T, setup *ServerSetup) *httptest.Server {
	handlerFunc := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		requestBytes, _ := ioutil.ReadAll(r.Body)
		requestBody := string(requestBytes)

		matched := false
		if r.Method == setup.Method && r.URL.EscapedPath() == setup.Path && requestBody == setup.Body {
			matched = true
			if setup.HTTPStatus == 0 {
				w.WriteHeader(http.StatusOK)
			} else {
				w.WriteHeader(setup.HTTPStatus)
			}
			_, err := w.Write([]byte(setup.Response))
			if err != nil {
				t.Fatalf("Unable to write test server response: %v", err)
			}
		}

		// TODO: remove before pushing
		/*if !reflect.DeepEqual(r.URL.EscapedPath(), setup.Path) {
			t.Fatalf("wanted: %s got: %s\n", setup.Path, r.URL.EscapedPath())
		}*/

		if !matched {
			t.Fatalf("No requests matched setup. Got method %s, Path %s, body %s\n", r.Method, r.URL.EscapedPath(), requestBody)
		}
	})

	return httptest.NewServer(handlerFunc)
}
Something like this function, but taking a map of request methods and paths mapped to the appropriate responses and writing them to the response writer? (A rough sketch of that idea follows below.)
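Here is a rough sketch of that idea; the RouteKey/RouteResponse shape and the helper name are mine, not from the question, and it deliberately mirrors the single-route buildTestServer above:

type RouteKey struct {
	Method, Path string
}

type RouteResponse struct {
	HTTPStatus int
	Response   string
}

// buildMultiRouteTestServer answers each (method, path) pair with its canned
// response and flags anything unexpected as a test failure.
func buildMultiRouteTestServer(t *testing.T, routes map[RouteKey]RouteResponse) *httptest.Server {
	t.Helper()
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		setup, ok := routes[RouteKey{Method: r.Method, Path: r.URL.EscapedPath()}]
		if !ok {
			// t.Errorf rather than t.Fatalf: FailNow must not be called from
			// the server's goroutine.
			t.Errorf("no canned response for %s %s", r.Method, r.URL.EscapedPath())
			http.Error(w, "unexpected request", http.StatusInternalServerError)
			return
		}
		status := setup.HTTPStatus
		if status == 0 {
			status = http.StatusOK
		}
		w.WriteHeader(status)
		if _, err := w.Write([]byte(setup.Response)); err != nil {
			t.Errorf("unable to write test server response: %v", err)
		}
	}))
}

A reindex test would then pre-register responses for the paths that mappingsOf, settingsOf, createIndex, the _reindex call, aliasesOf, deleteIndex and setAlias are expected to hit, and assert on the final error.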
