How to get a response after an HTTP request - Go

I'm studying Go and am a real newbie in this field.
I'm facing a problem when I try to copy a value.
Here is what I'm doing:
First, I get a response into response using an HTTP request:
httpClient := &http.Client{}
response, err := httpClient.Do(req)
if err != nil {
    panic(err)
}
After that, I want to save the response body to origin.txt:
origin_, _ := ioutil.ReadAll(response.Body)
f_, err := os.Create("origin.txt")
f_.Write(origin_)
Then I want to extract a specific value using the goquery package:
doc, err := goquery.NewDocumentFromReader(response.Body)
if err != nil {
    log.Fatal(err)
}
doc.Find(".className").Each(func(i int, s *goquery.Selection) {
    w.WriteString("============" + strconv.Itoa(i) + "============")
    s.Find("tr").Each(func(i int, s_ *goquery.Selection) {
        fmt.Println(s_.Text())
        w.WriteString(s_.Text())
    })
})
But in this case, I get exactly the value I want from 2), while 3) returns nothing.
At first I thought the problem was that the response object in 3) is affected by the action in 2), because it is a reference object. So I tried copying it to another object and running it again:
origin := *response
but I got the same result as before.
What should I do? How can I assign a reference value to another variable by value? Should I make the request twice, once for each step?

I actually don't see where you use shared resources between 2) and 3).
That being said, origin := *response won't buy you much. The data (response.Body) is an io.ReadCloser. ioutil.ReadAll() consumes and stores all the data the stream has; you only get to do this once.
However, you do have the data stored in origin. If you need another io.Reader for that data (say for case 3), you can make that byte slice look like an io.Reader again: bytes.NewReader(origin).
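Putting that together (a minimal sketch, reusing the names from the question): read the body once into a byte slice, then hand each consumer its own reader over those bytes.

origin, err := ioutil.ReadAll(response.Body)
if err != nil {
    panic(err)
}
response.Body.Close()

// 2) write the raw bytes to origin.txt
if err := ioutil.WriteFile("origin.txt", origin, 0644); err != nil {
    panic(err)
}

// 3) give goquery a fresh io.Reader over the same bytes
doc, err := goquery.NewDocumentFromReader(bytes.NewReader(origin))
if err != nil {
    log.Fatal(err)
}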

Related

Why *(*string)(unsafe.Pointer(&b)) doesn't work with bufio.Reader

I have a file containing some IP ranges:
1.1.1.0/24
1.1.2.0/24
2.2.1.0/24
2.2.2.0/24
I read this file into a slice and used *(*string)(unsafe.Pointer(&b)) to convert []byte to string, but it doesn't work:
func TestInitIpRangeFromFile(t *testing.T) {
    filepath := "/tmp/test"
    file, err := os.Open(filepath)
    if err != nil {
        t.Errorf("failed to open ip range file:%s, err:%s", filepath, err)
    }
    reader := bufio.NewReader(file)
    ranges := make([]string, 0)
    for {
        ip, _, err := reader.ReadLine()
        if err != nil {
            if err == io.EOF {
                break
            }
            logger.Fatalf("failed to read ip range file, err:%s", err)
        }
        t.Logf("ip:%s", *(*string)(unsafe.Pointer(&ip)))
        ranges = append(ranges, *(*string)(unsafe.Pointer(&ip)))
    }
    t.Logf("%v", ranges)
}
result:
task_test.go:71: ip:1.1.1.0/24
task_test.go:71: ip:1.1.2.0/24
task_test.go:71: ip:2.2.1.0/24
task_test.go:71: ip:2.2.2.0/24
task_test.go:75: [2.2.2.0/24 1.1.2.0/24 2.2.1.0/24 2.2.2.0/24]
Why did 1.1.1.0/24 change to 2.2.2.0/24?
If I change
*(*string)(unsafe.Pointer(&ip))
to string(ip), it works.
So, while reinterpreting a slice-header as a string-header the way you did is absolutely bonkers and has no guarantee whatsoever of working correctly, it's only indirectly the cause of your problem.
The real problem is that you're retaining a pointer to the return value of bufio.Reader.ReadLine(), but the docs for that method say "The returned buffer is only valid until the next call to ReadLine." That means the reader is free to reuse that memory later on, and that's exactly what's happening.
When you do the cast in the proper way, string(ip), Go copies the contents of the buffer into the newly-created string, which remains valid in the future. But when you type-pun the slice into a string, you keep the exact same pointer, which stops working as soon as the reader refills its buffer.
If you decided to do the pointer trickery as a performance hack to avoid copying and allocation... too bad. The reader interface is going to force you to copy the data out anyway, and since it does, you should just use string().
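For reference, a minimal sketch of the loop from the question with the copying conversion in place:

for {
    ip, _, err := reader.ReadLine()
    if err != nil {
        if err == io.EOF {
            break
        }
        logger.Fatalf("failed to read ip range file, err:%s", err)
    }
    // string(ip) copies the bytes out of the reader's internal buffer,
    // so the value stays valid after the next ReadLine call.
    ranges = append(ranges, string(ip))
}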

Multiple queries to Postgres within the same function

I'm new to Go, so sorry for the silly question in advance!
I'm using the Gin framework and want to make multiple queries to the database within the same handler (database/sql + lib/pq):
userIds := []int{}
bookIds := []int{}
var id int

/* Handling first query here */
rows, err := pgClient.Query(getUserIdsQuery)
defer rows.Close()
if err != nil {
    return
}
for rows.Next() {
    err := rows.Scan(&id)
    if err != nil {
        return
    }
    userIds = append(userIds, id)
}

/* Handling second query here */
rows, err = pgClient.Query(getBookIdsQuery)
defer rows.Close()
if err != nil {
    return
}
for rows.Next() {
    err := rows.Scan(&id)
    if err != nil {
        return
    }
    bookIds = append(bookIds, id)
}
I have a couple of questions regarding this code (any improvements and best practices would be appreciated):
Does Go properly handle defer rows.Close() in such a case? I mean, I reassign the rows variable later in the code, so will the compiler track both and properly close them at the end of the function?
Is it OK to reuse the shared id variable, or should I redeclare it inside each rows.Next() loop?
What's a better approach when there are even more queries within one handler? Should I have some kind of Writer that accepts a query and a slice and populates the slice with the retrieved ids?
Thanks.
I've never worked with the go-pg library, and my answer is mostly focused on the other stuff, which is generic and not specific to golang or go-pg.
Regardless of the fact that rows here has the same reference while being shared between the two queries (so one rows.Close() call would suffice, unless the library has some special implementation), defining two variables is cleaner, like userRows and bookRows.
Although I already said that I have not worked with go-pg, I believe you won't need to iterate through the rows and scan the id for every row manually; I believe the lib provides an API like this (based on a quick look at the documentation):
userIds := []int{}
err := pgClient.Query(&userIds, "select id from users where ...", args...)
Regarding your second question, it depends on what you mean by "ok". Since you're doing synchronous iteration, I don't think it would result in bugs, but when it comes to coding style, personally, I wouldn't do this.
I think that the best thing to do in your case is this:
// repo layer
func getUserIds(args whatever) ([]int, error) {...}

// these can be exposed, based on your packaging logic
func getBookIds(args whatever) ([]int, error) {...}

// service layer, or wherever you want to aggregate both queries
func getUserAndBookIds() ([]int, []int, error) {
    userIds, err := getUserIds(...)
    // potential error handling
    bookIds, err := getBookIds(...)
    // potential error handling
    return userIds, bookIds, nil // you have done err handling earlier
}
I think this code is easier to read/maintain. You won't face the variable reassignment and other issues.
You can take a look at the go-pg documentation for more details on how to improve your query.
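For illustration, here is a minimal sketch of one such repo-layer function on the asker's actual stack (database/sql + lib/pq); the query string and signature are placeholders:

func getUserIds(db *sql.DB) ([]int, error) {
    rows, err := db.Query("SELECT id FROM users")
    if err != nil {
        return nil, err
    }
    // rows is scoped to this function, so there is no reassignment to reason about
    defer rows.Close()

    var ids []int
    for rows.Next() {
        var id int // declared per iteration; no shared variable
        if err := rows.Scan(&id); err != nil {
            return nil, err
        }
        ids = append(ids, id)
    }
    return ids, rows.Err()
}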

Trying to detect if multiple files are present in a multipart/form-data request, and reject multiple attachments

I am building an API in Go that needs to store a file sent in a multipart form request. It also needs to return an error if more than one file is attached, or if the files do not have key values attached. I'm running into an issue where the Part from the multipart Reader changes upon iteration, so I can either successfully upload the first file but not return the error, or return the error when needed but then iterate past a valid request and upload nothing.
I have tried writing this a few ways, with for loops and without:
i := 0
var data io.Reader
for part, err := reader.NextPart(); err != io.EOF; part, err = reader.NextPart() {
    i++
    data = part
}
if i > 1 {
    return nil, errors.New("too many files")
}
req := storeRequest{
    Data:     data,
    FileName: r.URL.Path,
}
return req, nil
Any suggestions on how I could handle this? Thanks in advance.
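One possible approach (a sketch, not from the original thread): since reader.NextPart() invalidates the previous part, copy each file part's data into a buffer before advancing, and fail as soon as a second file shows up.

var buf bytes.Buffer
fileCount := 0
for {
    part, err := reader.NextPart()
    if err == io.EOF {
        break
    }
    if err != nil {
        return nil, err
    }
    if part.FileName() == "" {
        continue // a regular form field, not a file
    }
    fileCount++
    if fileCount > 1 {
        return nil, errors.New("too many files")
    }
    // copy now: the part becomes unreadable once NextPart advances
    if _, err := io.Copy(&buf, part); err != nil {
        return nil, err
    }
}
req := storeRequest{
    Data:     &buf,
    FileName: r.URL.Path,
}
return req, nil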

JSON issue getting a date from the golang dropbox library

I'm writing a little program using the Dropbox API to learn Go. I'm using the client library here: https://github.com/stacktic/dropbox.
I'm able to upload and download a file, so I know my API keys and such are working correctly. Using the Metadata method I can get the metadata for a file. However, when I try to use the UnmarshalJSON method to get a human-readable date from the ClientMtime item in the entry struct, I get "unexpected end of JSON input". Any ideas on what the issue is?
The code I'm using is as follows:
func main() {
    db := dropbox.NewDropbox()
    db.SetAppInfo("Blah", "blah")
    db.SetAccessToken("Token")
    list, err := db.Metadata("/app_folder/test.jpg", true, false, "", "", 1)
    if err != nil {
        log.Fatal(err)
    }
    var date []byte
    err = list.ClientMtime.UnmarshalJSON(date)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%v", date)
}
Thanks!
You want:
date, err := list.ClientMtime.MarshalJSON()
UnmarshalJSON goes the other way: []byte -> DBTime.
That's why you get an end-of-input error; the []byte you pass in is empty.
Alternatively, ClientMtime is a time.Time, which has String() and Format() methods.
You can access all the time formatting features by converting it.
See: https://github.com/stacktic/dropbox/blob/master/dropbox.go#L158
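A quick sketch of that conversion (assuming DBTime wraps time.Time, per the link above):

mtime := time.Time(list.ClientMtime)
fmt.Println(mtime.Format("2006-01-02 15:04:05")) // any time layout string works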

Accessing array post params in golang (gin)

I'm attempting to access an array of files and values posted to an API written in Gin (golang). I've got a function which takes a file, height, and width, then calls functions to resize the file and upload it to S3. However, I'm now attempting to upload multiple files at once.
func (rc *ResizeController) Resize(c *gin.Context) {
    file, header, err := c.Request.FormFile("file")
    filename := header.Filename
    if err != nil {
        log.Fatal(err)
    }
    height := c.PostForm("height")
    width := c.PostForm("width")
    finalFile := rc.Crop(height, width, file)
    go rc.Upload(filename, finalFile, "image/jpeg", s3.BucketOwnerFull)
    c.JSON(200, gin.H{"filename": filename})
}
I couldn't see anywhere in the docs how to access data in the following format:
item[0]file
item[0]width
item[0]height
item[1]file
item[1]width
item[1]height
etc.
I figured something along the lines of:
for index, element := range c.Request.PostForm("item") {
    fmt.Println(element.Height)
}
But that threw "c.Request.Values undefined (type *http.Request has no field or method Values)"
You can access the File slice directly instead of using the FormFile method on Request, assuming you have form arrays for width and height that correspond to the order in which the files were uploaded:
if err := ctx.Request.ParseMultipartForm(32 << 20); err != nil {
    // handle error
}
for i, fh := range ctx.Request.MultipartForm.File["item"] {
    // access file header using fh
    w := ctx.Request.MultipartForm.Value["width"][i]
    h := ctx.Request.MultipartForm.Value["height"][i]
}
The FormFile method on Request is just a wrapper around MultipartForm.File that returns the first file at that key.
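One caveat: indexing Value["width"][i] panics if the client sends fewer width or height values than files, so a defensive variant (a sketch, using the field names from the answer above) might check lengths first:

files := ctx.Request.MultipartForm.File["item"]
widths := ctx.Request.MultipartForm.Value["width"]
heights := ctx.Request.MultipartForm.Value["height"]
if len(widths) < len(files) || len(heights) < len(files) {
    ctx.JSON(400, gin.H{"error": "missing width/height for one or more files"})
    return
}
for i, fh := range files {
    f, err := fh.Open() // fh is a *multipart.FileHeader
    if err != nil {
        ctx.JSON(500, gin.H{"error": err.Error()})
        return
    }
    defer f.Close()
    // resize with widths[i] and heights[i], then upload as in the single-file version
}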
