How to test the main function in a Gin application? (Go)

How can I test func main? Like this:

func main() {
	Engine := GetEngine() // returns a gin router with handlers attached
	Engine.Run(":8080")
}

It has only two lines, but I'd like to have them covered.
TestMain is reserved for test preparation; does that mean testing main was not planned by the language creators?
I could move the contents to another function, mainReal, but that seems like over-engineering.
How do I test that gin has started well? Can I launch main in a separate goroutine, check the reply, and stop it?
Thanks.
P.S. The possible duplicate is not a precise duplicate: it is dedicated not to testing func main() itself but to ideas for moving its contents out, so it covers a different issue and approach.

Solution.
You may test function main() from package main the same way; just do not name the test TestMain. I launch main() in a separate goroutine, then connect to it and perform a request.
I decided to hit an auxiliary handler which should respond with a simple JSON {"status": "ok"}.
In my case:
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

func TestMainExecution(t *testing.T) {
	go main()
	// Give the server a moment to bind the port before connecting.
	time.Sleep(100 * time.Millisecond)
	resp, err := http.Get("http://127.0.0.1:8080/checkHealth")
	if err != nil {
		t.Fatalf("Cannot make get: %v\n", err)
	}
	bodySb, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		t.Fatalf("Error reading body: %v\n", err)
	}
	body := string(bodySb)
	fmt.Printf("Body: %v\n", body)
	var decodedResponse interface{}
	err = json.Unmarshal(bodySb, &decodedResponse)
	if err != nil {
		t.Fatalf("Cannot decode response <%s> from server. Err: %v", bodySb, err)
	}
	assert.Equal(t, map[string]interface{}{"status": "ok"}, decodedResponse,
		"Should return status:ok")
}
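For completeness, a minimal sketch of what GetEngine and the auxiliary handler might look like (the /checkHealth route name is an assumption taken from the test above):

package main

import "github.com/gin-gonic/gin"

// GetEngine returns a gin router with handlers attached.
func GetEngine() *gin.Engine {
	engine := gin.Default()
	// Hypothetical health-check endpoint matching the test above.
	engine.GET("/checkHealth", func(c *gin.Context) {
		c.JSON(200, gin.H{"status": "ok"})
	})
	return engine
}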

Related

Google PubSub and Go: create client outside or inside publish-function?

I'm new when it comes to Google PubSub (and pub/sub applications in general). I'm also relatively new to Go.
I'm working on a pretty heavy backend service application that already has too many responsibilities. The service needs to fire off one message to a Google PubSub topic for each incoming request. It only needs to "fire and forget": if publishing fails, nothing happens. The messages are not crucial (they are only used for analytics), but there will be many of them; we estimate between 50 and 100 messages per second for most of the day.
Now to the code:
func (p *publisher) Publish(message Message, log zerolog.Logger) error {
	ctx := context.Background()
	client, err := pubsub.NewClient(ctx, p.project)
	if err != nil {
		log.Error().Msgf("Error creating client: %v", err)
		return err
	}
	// Close the client only after we know it was created successfully.
	defer client.Close()
	marshalled, _ := json.Marshal(message)
	topic := client.Topic(p.topic)
	result := topic.Publish(ctx, &pubsub.Message{
		Data: marshalled,
	})
	_, err = result.Get(ctx)
	if err != nil {
		log.Error().Msgf("Failed to publish message: %v", err)
		return err
	}
	return nil
}
Disclaimer: p *publisher only contains configuration.
I wonder if this is the best way. Will this lead to the service creating and closing a client 100 times per second? If so, I guess I should create the client once and pass it as an argument to the Publish() function instead?
This is how the Publish() function gets called:
defer func(publisher publish.Publisher, message Message, log zerolog.Logger) {
	err := publisher.Publish(message, log)
	if err != nil {
		log.Error().Msgf("Failed to publish message: %v", err)
	}
}(publisher, message, logger)
Maybe the way to go is to hold the pubsubClient and pubsubTopic inside a struct:
type myStruct struct {
	pubsubClient *pubsub.Client
	pubsubTopic  *pubsub.Topic
	logger       *yourLogger.Logger
}

func newMyStruct(projectID, topicName string, logger *yourLogger.Logger) (*myStruct, error) {
	ctx := context.Background()
	pubsubClient, err := pubsub.NewClient(ctx, projectID)
	if err != nil {
		return nil, err
	}
	pubsubTopic := pubsubClient.Topic(topicName)
	return &myStruct{
		pubsubClient: pubsubClient,
		pubsubTopic:  pubsubTopic,
		logger:       logger,
		// and whatever else you want :D
	}, nil
}
Then, for that struct, create a method that takes responsibility for marshalling the message and sending it to Pub/Sub:
func (s *myStruct) request(ctx context.Context, data yourData) error {
	marshalled, err := json.Marshal(data)
	if err != nil {
		return err
	}
	res := s.pubsubTopic.Publish(ctx, &pubsub.Message{
		Data: marshalled,
	})
	if _, err := res.Get(ctx); err != nil {
		return err
	}
	return nil
}
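With this setup the client is created once at startup and reused for every publish, which is what the Pub/Sub client library is designed for (topic.Publish batches messages internally). A hypothetical call site, assuming the names above:

// Created once, e.g. in main() or the service constructor
// (project and topic names here are placeholders).
s, err := newMyStruct("my-project", "my-topic", logger)
if err != nil {
	log.Fatalln(err)
}

// Called per incoming request; reuses the same client and topic.
if err := s.request(ctx, data); err != nil {
	log.Printf("Failed to publish message: %v", err)
}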

Trouble figuring out data race in goroutine

I started learning Go recently and I've been chipping away at this for a while now, but figured it was time to ask for some specific help. My program requests paginated data from an API, and since there are about 160 pages of data, it seemed like a good use of goroutines. Except I have race conditions and I can't seem to figure out why. It's probably because I'm new to the language, but my impression was that parameters are passed to a function as a copy of the data in the calling function, unless they are pointers.
According to that, this should be making copies of my data, which leaves me free to change it in the main function, but I end up requesting some pages multiple times and other pages just once.
My main.go
package main

import (
	"bufio"
	"encoding/json"
	"log"
	"net/http"
	"net/url"
	"os"
	"strconv"
	"sync"

	"github.com/joho/godotenv"
)

func main() {
	err := godotenv.Load()
	if err != nil {
		log.Fatalln(err)
	}
	httpClient := &http.Client{}
	baseURL := "https://api.data.gov/ed/collegescorecard/v1/schools.json"
	filters := make(map[string]string)
	page := 0
	filters["school.degrees_awarded.predominant"] = "2,3"
	filters["fields"] = "id,school.name,school.city,2018.student.size,2017.student.size,2017.earnings.3_yrs_after_completion.overall_count_over_poverty_line,2016.repayment.3_yr_repayment.overall"
	filters["api_key"] = os.Getenv("API_KEY")
	outFile, err := os.Create("./out.txt")
	if err != nil {
		log.Fatalln(err)
	}
	writer := bufio.NewWriter(outFile)
	requestURL := getRequestURL(baseURL, filters)
	response := requestData(requestURL, httpClient)
	wg := sync.WaitGroup{}
	for (page+1)*response.Metadata.ResultsPerPage < response.Metadata.TotalResults {
		page++
		filters["page"] = strconv.Itoa(page)
		wg.Add(1)
		go func() {
			defer wg.Done()
			requestURL := getRequestURL(baseURL, filters)
			response := requestData(requestURL, httpClient)
			_, err = writer.WriteString(response.TextOutput())
			if err != nil {
				log.Fatalln(err)
			}
		}()
	}
	wg.Wait()
}

func getRequestURL(baseURL string, filters map[string]string) *url.URL {
	requestURL, err := url.Parse(baseURL)
	if err != nil {
		log.Fatalln(err)
	}
	query := requestURL.Query()
	for key, value := range filters {
		query.Set(key, value)
	}
	requestURL.RawQuery = query.Encode()
	return requestURL
}

func requestData(url *url.URL, httpClient *http.Client) CollegeScoreCardResponseDTO {
	request, _ := http.NewRequest(http.MethodGet, url.String(), nil)
	resp, err := httpClient.Do(request)
	if err != nil {
		log.Fatalln(err)
	}
	defer resp.Body.Close()
	var parsedResponse CollegeScoreCardResponseDTO
	err = json.NewDecoder(resp.Body).Decode(&parsedResponse)
	if err != nil {
		log.Fatalln(err)
	}
	return parsedResponse
}
I know another issue I will be running into is writing to the output file in the correct order, but I believe using channels to tell each goroutine which request has finished writing could solve that. If I'm wrong about that, I would appreciate any advice on how to approach it as well.
Thanks in advance.
Goroutines do not receive copies of the data they capture. When the compiler detects that a variable "escapes" the current function, it allocates that variable on the heap; filters is one such variable here. When the goroutine starts, the filters it accesses is the same map the main goroutine keeps modifying, and since there is no locking, there is no guarantee of what the goroutine sees.
I suggest you keep filters read-only: create a new map in each goroutine by copying all items from filters, and add the "page" key inside the goroutine. You have to be careful to pass a copy of page as well:
go func(page int) {
	flt := make(map[string]string)
	for k, v := range filters {
		flt[k] = v
	}
	flt["page"] = strconv.Itoa(page)
	...
}(page)
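Putting it together, the paging loop from the question might look like the sketch below, with getRequestURL, requestData, and the surrounding variables as defined in the question. Note that bufio.Writer is also not safe for concurrent use, so the writes are guarded with a mutex here:

var mu sync.Mutex // guards writer, which is not safe for concurrent use
for (page+1)*response.Metadata.ResultsPerPage < response.Metadata.TotalResults {
	page++
	wg.Add(1)
	go func(page int) {
		defer wg.Done()
		// Copy the shared filters map so this goroutine owns its own copy.
		flt := make(map[string]string)
		for k, v := range filters {
			flt[k] = v
		}
		flt["page"] = strconv.Itoa(page)
		requestURL := getRequestURL(baseURL, flt)
		response := requestData(requestURL, httpClient)
		mu.Lock()
		_, err := writer.WriteString(response.TextOutput())
		mu.Unlock()
		if err != nil {
			log.Fatalln(err)
		}
	}(page)
}
wg.Wait()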

Why does fasthttp behave like a single process?

requestHandler := func(ctx *fasthttp.RequestCtx) {
	time.Sleep(time.Second * time.Duration(10))
	fmt.Fprintf(ctx, "Hello, world! Requested path is %q", ctx.Path())
}
s := &fasthttp.Server{
	Handler: requestHandler,
}
if err := s.ListenAndServe("127.0.0.1:82"); err != nil {
	log.Fatalf("error in ListenAndServe: %s", err)
}
With multiple requests, the total time is roughly X*10s. Is fasthttp single-process?
After two days...
I am sorry for this question; I described it poorly. The problem was caused by the browser: it requests the same URL sequentially, which misled me into thinking the fasthttp web server handles requests synchronously.
I think instead of "is fasthttp single-process?", you're asking whether fasthttp handles client requests concurrently or not.
I'm pretty sure that any server package (including fasthttp) will handle client requests concurrently. You should write a test/benchmark instead of manually accessing the server through several browsers. The following is an example of such test code:
package main_test

import (
	"io/ioutil"
	"net/http"
	"sync"
	"testing"
	"time"
)

func doRequest(uri string) error {
	resp, err := http.Get(uri)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	_, err = ioutil.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	return nil
}

func TestGet(t *testing.T) {
	N := 1000
	wg := sync.WaitGroup{}
	wg.Add(N)
	start := time.Now()
	for i := 0; i < N; i++ {
		go func() {
			if err := doRequest("http://127.0.0.1:82"); err != nil {
				t.Error(err)
			}
			wg.Done()
		}()
	}
	wg.Wait()
	t.Logf("Total duration for %d concurrent request(s) is %v", N, time.Since(start))
}
And the result (on my computer) is
fasthttp_test.go:42: Total duration for 1000 concurrent request(s) is 10.6066411s
You can see that the answer to your question is no: fasthttp handles requests concurrently (1000 requests against a handler that sleeps 10s complete in about 10.6s, not 1000*10s).
UPDATE:
In case the requested URL is the same, your browser may perform the requests sequentially. See "Multiple Ajax requests for same URL". This explains why the response times are X*10s.

Split function into 2 functions for test coverage

How can I test the error path of ioutil.ReadAll(resp.Body)? Do I need to split my function in two: one that makes the request, and another that reads the body and returns the bytes and error?
func fetchUrl(URL string) ([]byte, error) {
	resp, err := http.Get(URL)
	if err != nil {
		return nil, err
	}
	body, err := ioutil.ReadAll(resp.Body)
	resp.Body.Close()
	if err != nil {
		return nil, err
	}
	return body, nil
}
Do I need to split my function in two: one that makes the request, and another that reads the body and returns the bytes and error?
The first one is called http.Get and the other one ioutil.ReadAll, so I don't think there's anything to split. You just created a function that uses two other functions together, which you should assume are working correctly. You could even simplify your function to make it more obvious:
func fetchURL(URL string) ([]byte, error) {
	resp, err := http.Get(URL)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return ioutil.ReadAll(resp.Body)
}
If you want to test anything, it is your fetchURL function using http.Get and ioutil.ReadAll together. I wouldn't personally bother to test it directly, but if you insist on it, you can overwrite http.DefaultTransport for a single test and provide your own, which returns an http.Response whose body implements some error scenario (e.g. an error during body read).
Here is the sketch idea:
type BrokenTransport struct {
}

func (*BrokenTransport) RoundTrip(*http.Request) (*http.Response, error) {
	// Return a Response with a Body implementing the specific error behaviour.
}

http.DefaultTransport = &BrokenTransport{}
// http.Get will now use your RoundTripper.
// You should probably restore http.DefaultTransport after the test.
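Fleshing that sketch out, a complete test might look like the following (my own sketch: errReader and its error message are made up for illustration, with fetchURL as defined above):

package main

import (
	"errors"
	"io/ioutil"
	"net/http"
	"testing"
)

// errReader is a hypothetical io.Reader that always fails, simulating
// an error while reading the response body.
type errReader struct{}

func (errReader) Read([]byte) (int, error) {
	return 0, errors.New("simulated read error")
}

type BrokenTransport struct{}

func (*BrokenTransport) RoundTrip(*http.Request) (*http.Response, error) {
	// Return a well-formed response whose body fails on Read.
	return &http.Response{
		StatusCode: http.StatusOK,
		Body:       ioutil.NopCloser(errReader{}),
	}, nil
}

func TestFetchURLReadError(t *testing.T) {
	old := http.DefaultTransport
	http.DefaultTransport = &BrokenTransport{}
	defer func() { http.DefaultTransport = old }()

	if _, err := fetchURL("http://example.com/"); err == nil {
		t.Fatal("expected a body read error, got nil")
	}
}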
Basically yes, unless you're using net/http/httptest or a similar way to mock your HTTP server when testing.
But the question is: what would you really be testing? That ioutil.ReadAll() detects errors? I'm sure that is already covered by the test suite of Go's standard library.
Hence I'd say that in this particular case you're about to test for testing's sake. IMO, for such trivial cases it's better to concentrate on how the fetched result is further processed.
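For reference, the net/http/httptest approach mentioned above looks roughly like this (a sketch exercising the success path of fetchURL against a local test server):

package main

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestFetchURL(t *testing.T) {
	// Spin up a local HTTP server that serves a known body.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	}))
	defer srv.Close()

	body, err := fetchURL(srv.URL)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if string(body) != "hello" {
		t.Fatalf("unexpected body: %q", body)
	}
}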

How to make a websocket client wait until the server is running?

I want to create a websocket client that waits until the server is running. If the connection is closed by the server, it should reconnect.
What I tried does not work, and my code exits with a runtime error:
panic: runtime error: invalid memory address or nil pointer dereference
func run() {
	origin := "http://localhost:8080/"
	url := "ws://localhost:8080/ws"
	ws, err := websocket.Dial(url, "", origin)
	if err != nil {
		fmt.Println("Connection fails, is being re-connection")
		main()
	}
	if _, err := ws.Write([]byte("something")); err != nil {
		log.Fatal(err)
	}
}
Your example looks like a code snippet; it's difficult to say why you're getting that error without seeing all the code. As was pointed out in the comments on your post, you can't just call main() again from your code: after that call returns, execution falls through to ws.Write with ws still nil, which is the likely source of the nil pointer dereference. Including the line numbers from the panic report would have been helpful as well.
Usually, minimizing your program to a minimal case that anyone can run and reproduce is the fastest way to get help. I've reconstructed yours for you in such fashion; hopefully you can use it to fix your own code.
package main

import (
	"fmt"
	"log"
	"time"

	"golang.org/x/net/websocket"
)

func main() {
	origin := "http://localhost:8080/"
	url := "ws://localhost:8080/ws"
	var err error
	var ws *websocket.Conn
	for {
		ws, err = websocket.Dial(url, "", origin)
		if err != nil {
			fmt.Println("Connection failed, retrying...")
			time.Sleep(1 * time.Second)
			continue
		}
		break
	}
	if _, err := ws.Write([]byte("something")); err != nil {
		log.Fatal(err)
	}
}
To run this, just copy it into a file called main.go on your system and then run:
go run main.go
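The code above only retries the initial dial; the question also asks for a reconnect when the server closes the connection. One way to extend it is to wrap the dial-and-use cycle in an outer loop (a sketch; runWithReconnect is a hypothetical name, and the same imports apply):

// runWithReconnect dials the server (retrying until it is up), uses the
// connection, and dials again if a write fails.
func runWithReconnect(url, origin string) {
	for {
		var ws *websocket.Conn
		var err error
		// Wait until the server is running.
		for {
			ws, err = websocket.Dial(url, "", origin)
			if err == nil {
				break
			}
			fmt.Println("Connection failed, retrying...")
			time.Sleep(1 * time.Second)
		}
		// Use the connection; on a write error, loop back and reconnect.
		if _, err := ws.Write([]byte("something")); err != nil {
			log.Println("Connection lost, reconnecting:", err)
			ws.Close()
			continue
		}
		return
	}
}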
