I have a weird issue with this code:
func PrepareFileUpload(filePath, url string) (*http.Request, error) {
	pr, pw := io.Pipe()
	mpw := multipart.NewWriter(pw)
	go func() {
		// Propagate any error to the reading side via CloseWithError so the
		// HTTP request fails instead of silently sending a truncated body.
		err := func() error {
			part, err := mpw.CreateFormFile("file", filepath.Base(filePath))
			if err != nil {
				return err
			}
			file, err := os.Open(filePath)
			if err != nil {
				return err
			}
			defer file.Close()
			if _, err := io.Copy(part, file); err != nil {
				return err
			}
			return mpw.Close()
		}()
		pw.CloseWithError(err) // a nil error closes the pipe normally
	}()
	req, err := http.NewRequest("POST", url, pr)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", mpw.FormDataContentType())
	return req, nil
}
which I use like this:
filePath := "foo.bar"
s := []byte("Test file")
if err := ioutil.WriteFile(filePath, s, 0644); err != nil {
	log.Fatal(err)
}
values := url.Values{}
values.Set("folderid", "123456")
values.Set("filename", filepath.Base(filePath))
values.Set("nopartial", "1")
u := url.URL{
	Scheme:   "https",
	Host:     "eapi.pcloud.com",
	Path:     "/uploadfile",
	RawQuery: values.Encode(),
}
req, err := PrepareFileUpload(filePath, u.String())
if err != nil {
	log.Fatal(err)
}
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", "ACCESS_TOKEN"))
resp, err := http.DefaultClient.Do(req)
if err != nil {
	log.Fatal(err)
}
defer resp.Body.Close()
retData, err := ioutil.ReadAll(resp.Body)
if err != nil {
	log.Fatal(err)
}
fmt.Println(string(retData))
For some reason, when used with the pCloud API, this hangs when running http.DefaultClient.Do(req). I have tried creating my own test server in Go, and there are no issues there, so I suspect some issue in the communication between the Go client and the pCloud server, but I can't figure out what it is (I've tried forcing HTTP/1.1, but no dice).
When I upload files with a bytes.Buffer instead of io.Pipe, everything is OK, but that approach doesn't work with large files (OOM).
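For reference, the bytes.Buffer variant I mean is roughly this minimal sketch (the whole multipart body is assembled in memory, so http.NewRequest can set Content-Length, but large files trigger the OOM):
var buf bytes.Buffer
mpw := multipart.NewWriter(&buf)
part, err := mpw.CreateFormFile("file", filepath.Base(filePath))
if err != nil {
	return nil, err
}
file, err := os.Open(filePath)
if err != nil {
	return nil, err
}
defer file.Close()
if _, err := io.Copy(part, file); err != nil {
	return nil, err
}
if err := mpw.Close(); err != nil {
	return nil, err
}
// With an in-memory body, NewRequest sets req.ContentLength automatically.
req, err := http.NewRequest("POST", url, &buf)
if err != nil {
	return nil, err
}
req.Header.Set("Content-Type", mpw.FormDataContentType())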
The only warning I get when enabling verbose HTTP debugging is:
2022/04/21 10:43:29 http2: Transport failed to get client conn for eapi.pcloud.com:443: http2: no cached connection was available
This doesn't happen when I force HTTP/1.1, but the connection still hangs, so I'm not sure how relevant this error is.
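(For anyone trying to reproduce: a standard way to force HTTP/1.1 on the client is to disable the transport's HTTP/2 upgrade, a minimal sketch, needing crypto/tls imported:)
// A non-nil, empty TLSNextProto map disables HTTP/2 on this transport,
// so the client speaks HTTP/1.1 even over TLS.
client := &http.Client{
	Transport: &http.Transport{
		TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
	},
}
resp, err := client.Do(req)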
Does anyone have any idea what could be the cause and how to fix it? Any help is appreciated.
I have a Go GRPC server-side streaming function:
func (server *Server) GetClients(req *iam.GetClientsRequest, client iam.IAM_GetClientsServer) error {
	// The stream embeds grpc.ServerStream, so its context is available directly.
	ctx := client.Context()
	userID, err := getUserIDStream(client)
	if err != nil {
		return err
	}
	clients, err := server.db.QueryByUserID(ctx, userID)
	if err != nil {
		// status.Error replaces the deprecated grpc.Errorf
		return status.Error(codes.Internal, apiutils.ServerError)
	}
	for _, value := range clients {
		converted, err := server.fromInternalClient(value)
		if err != nil {
			return err
		}
		if err := client.Send(converted); err != nil {
			return err
		}
	}
	return nil
}
and I'm testing it like this:
It("GetClients - Send fails - Error", func() {
handler := createHandler(db)
lis := bufconn.Listen(bufSize)
server := grpc.NewServer()
iam.RegisterIAMServer(server, NewServer(handler))
go func() {
if err := server.Serve(lis); err != nil {
log.Fatalf("Server exited with error: %v", err)
}
}()
defer lis.Close()
defer server.GracefulStop()
conn, err := grpc.DialContext(context.Background(), "bufnet",
grpc.WithContextDialer(createBufDialier(lis)), grpc.WithInsecure())
Expect(err).ShouldNot(HaveOccurred())
defer conn.Close()
client := iam.NewIAMClient(conn)
cclient, _ := client.GetClients(addAccessToken(context.Background()), new(iam.GetClientsRequest))
resp, err := cclient.Recv()
Expect(resp).Should(BeNil())
Expect(err).Should(HaveOccurred())
Expect(err.Error()).Should(Equal(message))
})
My issue is that I'm not sure how to induce a failure on Send so I can test the response. Since I'm using an actual test server and client, I can't just mock out the object and I'd prefer not to go that route anyway. Is there a way I can do this?
Originally, I was trying to force Send to fail by setting bufSize to an artificially low value. However, this wasn't producing an error, so I decided to try modifying the maxSendMessageSize on the server:
opts := []grpc.ServerOption{}
if sendFails {
	opts = append(opts, grpc.MaxSendMsgSize(10))
}

lis := bufconn.Listen(bufSize)
server := grpc.NewServer(opts...)
This worked and produced the error.
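If it helps anyone, the failure then surfaces on the client's Recv, and asserting on the gRPC status code is less brittle than matching the exact message. A sketch, assuming the oversized Send maps to codes.ResourceExhausted (which is what grpc-go returns for messages above the send limit; needs google.golang.org/grpc/status and google.golang.org/grpc/codes imported):
// Recv should now fail because the server's Send exceeds MaxSendMsgSize.
resp, err := cclient.Recv()
Expect(resp).Should(BeNil())
Expect(status.Code(err)).Should(Equal(codes.ResourceExhausted))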
I have a function which uploads images to remote storage via a PUT request:
func sendFileToStorage(path string) error {
	logger.Get().Info(nil, "Got new file: "+path)
	request, err := newfileUploadRequest(path)
	if err != nil {
		return err
	}
	client := &http.Client{}
	resp, err := client.Do(request)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println(resp.StatusCode)
	return nil
}

func newfileUploadRequest(path string) (*http.Request, error) {
	file, err := ioutil.ReadFile(path)
	if err != nil {
		return nil, err
	}
	url := config.Get().ExternalStorage.Url
	req, err := http.NewRequest("PUT", url, bytes.NewBuffer(file))
	if err != nil {
		return nil, err
	}
	req.SetBasicAuth(config.Get().ExternalStorage.Login, config.Get().ExternalStorage.Password)
	return req, nil
}
but sometimes the file does not upload completely; it gets cut off. This happens when the image is big (more than 7 MB).
When I try to upload the image with Postman I don't have this problem. What am I doing wrong?
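For comparison, here is a variant that streams the file from disk instead of buffering it all in memory (a minimal sketch, not a confirmed fix; the function name newStreamingUploadRequest is hypothetical, and it sets Content-Length explicitly because http.NewRequest can't derive it from an *os.File):
func newStreamingUploadRequest(path string) (*http.Request, error) {
	file, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	fi, err := file.Stat()
	if err != nil {
		file.Close()
		return nil, err
	}
	url := config.Get().ExternalStorage.Url
	// The body is streamed from disk; the transport closes the file after sending.
	req, err := http.NewRequest("PUT", url, file)
	if err != nil {
		file.Close()
		return nil, err
	}
	// NewRequest only sets ContentLength for in-memory readers
	// (*bytes.Buffer, *bytes.Reader, *strings.Reader), so set it here
	// to avoid chunked transfer encoding.
	req.ContentLength = fi.Size()
	req.SetBasicAuth(config.Get().ExternalStorage.Login, config.Get().ExternalStorage.Password)
	return req, nil
}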
Should we do this:
response, err := http.Get(url)
if err != nil {
	log.Fatal(err)
}
defer response.Body.Close()
or this:
response, err := http.Get(url)
defer response.Body.Close()
if err != nil {
	log.Fatal(err)
}
I guess response can be nil. But I wonder: can a response be created and an error still be returned?
I guess I would do something like this?
response, err := http.Get(url)
if response != nil {
	defer response.Body.Close()
}
if err != nil {
	log.Fatal(err)
}
Your first code block is correct.
From the documentation:
The client must close the response body when finished with it:
resp, err := http.Get("http://example.com/")
if err != nil {
// handle error
}
defer resp.Body.Close()
body, err := ioutil.ReadAll(resp.Body)
If there is an error, the response is nil.
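To the follow-up question: per the http.Client documentation, the one case where you get both a non-nil response and a non-nil error is a failing CheckRedirect, and even then the returned Response.Body has already been closed, so the first pattern stays safe. A sketch (the URL is a placeholder):
client := &http.Client{
	CheckRedirect: func(req *http.Request, via []*http.Request) error {
		return errors.New("redirects not allowed")
	},
}
resp, err := client.Get("http://example.com/some-redirecting-url")
// If the server redirects, err is non-nil AND resp is non-nil here,
// but resp.Body has already been closed by the client, so deferring
// resp.Body.Close() after the error check remains correct.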
I am trying to stream a file from one HTTP endpoint to another and avoid storing large files on disk. I thought I had it working with this code, but it is creating empty files:
// out, err := os.Create(key)
resp, err := http.Get("http://source_url.com/_content/" + key)
if err != nil {
	log.Fatal(err)
}
defer resp.Body.Close()

// now stream the file straight to the endpoint using put
req, err := http.NewRequest("PUT", "http://dest_url.com/_content/"+key, resp.Body)
if err != nil {
	log.Fatal(err)
}
req.Header.Set("Content-Type", "application/octet-stream")

client := &http.Client{}
res, err := client.Do(req)
if err != nil {
	log.Fatal(err)
}
defer res.Body.Close()
fmt.Println(key, res.ContentLength, res.Status)
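One thing worth checking (an assumption, not a confirmed diagnosis): with resp.Body as the request body, http.NewRequest cannot set ContentLength, so the PUT goes out with chunked transfer encoding, which some endpoints mishandle. Copying the length across, before client.Do(req), is a cheap experiment:
// resp.ContentLength is -1 when the source didn't send a length;
// only forward it when it's actually known.
if resp.ContentLength >= 0 {
	req.ContentLength = resp.ContentLength
}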
I'm trying to download a remote file over SSH.
The following approach works fine in a shell:
ssh hostname "tar cz /opt/local/folder" > folder.tar.gz
However, the same approach in Go produces an output artifact of a different size. For example, the same folder produces a 179 B gz file via pure shell but 178 B via the Go script.
I assume something is being dropped from the io.Reader, or the session is closed early. Any help is appreciated.
Here is an example of my script:
func executeCmd(cmd, hostname string, config *ssh.ClientConfig, path string) error {
	conn, _ := ssh.Dial("tcp", hostname+":22", config)
	session, err := conn.NewSession()
	if err != nil {
		panic("Failed to create session: " + err.Error())
	}
	r, _ := session.StdoutPipe()
	scanner := bufio.NewScanner(r)
	go func() {
		defer session.Close()
		name := fmt.Sprintf("%s/backup_folder_%v.tar.gz", path, time.Now().Unix())
		file, err := os.OpenFile(name, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0644)
		if err != nil {
			panic(err)
		}
		defer file.Close()
		for scanner.Scan() {
			fmt.Println(scanner.Bytes())
			if err := scanner.Err(); err != nil {
				fmt.Println(err)
			}
			if _, err = file.Write(scanner.Bytes()); err != nil {
				log.Fatal(err)
			}
		}
	}()
	if err := session.Run(cmd); err != nil {
		fmt.Println(err.Error())
		panic("Failed to run: " + err.Error())
	}
	return nil
}
Thanks!
bufio.Scanner is for newline-delimited text. According to the documentation, the scanner strips the newline characters, so every byte with value 10 ('\n') is removed from your binary file.
You don't need a goroutine to do the copy, because you can use session.Start to start the process asynchronously.
You probably don't need bufio either. You should be using io.Copy to copy the file; it has its own internal buffer on top of any buffering done in the ssh client itself. If an additional buffer is needed for performance, wrap the session output in a bufio.Reader.
Finally, you return an error value, so use it rather than panicking on regular error conditions.
conn, err := ssh.Dial("tcp", hostname+":22", config)
if err != nil {
	return err
}
session, err := conn.NewSession()
if err != nil {
	return err
}
defer session.Close()

r, err := session.StdoutPipe()
if err != nil {
	return err
}

name := fmt.Sprintf("%s/backup_folder_%v.tar.gz", path, time.Now().Unix())
file, err := os.OpenFile(name, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0644)
if err != nil {
	return err
}
defer file.Close()

if err := session.Start(cmd); err != nil {
	return err
}

if _, err := io.Copy(file, r); err != nil {
	return err
}

if err := session.Wait(); err != nil {
	return err
}

return nil
You can try doing something like this:
r, _ := session.StdoutPipe()
reader := bufio.NewReader(r)
go func() {
	defer session.Close()
	// open file etc
	// 10 is the number of bytes you'd like to copy in one write operation
	p := make([]byte, 10)
	for {
		n, err := reader.Read(p)
		// Write whatever was read before checking the error, so the final
		// chunk isn't dropped if Read returns data together with io.EOF.
		if n > 0 {
			if _, werr := file.Write(p[:n]); werr != nil {
				log.Fatal(werr)
			}
		}
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal("err", err)
		}
	}
}()
Make sure your goroutines are synchronized properly so the output is completely written to the file.
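For example, you can wait on the copy goroutine with a sync.WaitGroup before returning (a minimal sketch building on the snippet above):
var wg sync.WaitGroup
wg.Add(1)
go func() {
	defer wg.Done()
	defer session.Close()
	// ... read loop from above ...
}()
if err := session.Run(cmd); err != nil {
	return err
}
wg.Wait() // don't return until the file is fully written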