Issue when writing file to disk after downloading it by FTP - go

The file that is written to disk is empty, but the reader is not.
I do not understand where the issue is.
I tried playing with a Buffer and its String() method, and I can confirm that the content is fine, but using this library's Read() method is not working.
The library I use is github.com/jlaffaye/ftp.
// pullFileByFTP
func pullFileByFTP(fileID, server string, port int64, username, password, path, file string) error {
    // Connect to the server
    client, err := ftp.Dial(fmt.Sprintf("%s:%d", server, port))
    if err != nil {
        return err
    }
    // Log in to the server
    err = client.Login(username, password)
    if err != nil {
        return err
    }
    // Retrieve the file
    reader, err := client.Retr(fmt.Sprintf("%s%s", path, file))
    if err != nil {
        return err
    }
    // Read the file
    var srcFile []byte
    _, err = reader.Read(srcFile)
    if err != nil {
        return err
    }
    // Create the destination file
    dstFile, err := os.Create(fmt.Sprintf("%s/%s", shared.TmpDir, fileID))
    if err != nil {
        return fmt.Errorf("Error while creating the destination file : %s", err)
    }
    defer dstFile.Close()
    // Copy the file
    dstFile.Write(srcFile)
    return nil
}

You are using Read and Write wrong:
var srcFile []byte
_, err = reader.Read(srcFile)
Read puts the read bytes into its argument. Since srcFile is a nil slice, this instructs the reader to read zero bytes. Use ioutil.ReadAll (io.ReadAll since Go 1.16) to read all bytes.
Next up is your use of Write. Write(b) writes up to len(b) bytes and reports how many were written; you must check both return values instead of discarding them (a conforming io.Writer returns a non-nil error whenever it writes fewer than len(b) bytes).
However, in your case you just want to connect an io.Reader (*Response implements io.Reader) and io.Writer (*os.File). That's what io.Copy is for:
// error checks elided for brevity
reader, err := client.Retr(path + file)
dstFile, err := ioutil.TempFile("", fileID)
_, err = io.Copy(dstFile, reader)
err = dstFile.Close()
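Putting it all together, a minimal sketch of the corrected function (untested; it keeps the question's signature and its shared.TmpDir constant, and needs fmt, io, os, and github.com/jlaffaye/ftp):

// pullFileByFTP downloads path+file from the FTP server and streams it to disk.
func pullFileByFTP(fileID, server string, port int64, username, password, path, file string) error {
    // Connect to the server
    client, err := ftp.Dial(fmt.Sprintf("%s:%d", server, port))
    if err != nil {
        return err
    }
    defer client.Quit()
    // Log in to the server
    if err := client.Login(username, password); err != nil {
        return err
    }
    // Retrieve the file; close the response before issuing further commands
    reader, err := client.Retr(path + file)
    if err != nil {
        return err
    }
    defer reader.Close()
    // Create the destination file
    dstFile, err := os.Create(fmt.Sprintf("%s/%s", shared.TmpDir, fileID))
    if err != nil {
        return fmt.Errorf("error while creating the destination file: %s", err)
    }
    defer dstFile.Close()
    // io.Copy runs the read/write loop and handles short writes for us
    if _, err := io.Copy(dstFile, reader); err != nil {
        return err
    }
    return nil
}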

Related

Handle Http Upload Zip file in Golang

I'm using the golang net/http package to retrieve a zip file uploaded via Postman.
The attachment file link. It is not a dangerous file; feel free to check it out.
Development env:
- local machine: M1 MacBook Pro, golang 1.17.2 - no issue
- server: docker image golang:1.17.5-stretch - has the issue
Code to capture the transSourceFile field of the POST form:
func HandleFileReqTest(w http.ResponseWriter, req *http.Request, params map[string]string) error {
    if err := req.ParseMultipartForm(32 << 20); err != nil {
        return err
    }
    file, header, err := req.FormFile("transSourceFile")
    if err != nil {
        return err
    }
    defer file.Close()
    fmt.Println("header.Size:", header.Size)
    return nil
}
I also tried the code below, to no avail:
func HandleFileReqTest(w http.ResponseWriter, req *http.Request, params map[string]string) error {
    if err := req.ParseForm(); err != nil {
        return err
    }
    req.ParseMultipartForm(32 << 20)
    file, header, err := req.FormFile("transSourceFile")
    if err != nil {
        return err
    }
    defer file.Close()
    fmt.Println("header.Size:", header.Size)
    return nil
}
Result:
- Local machine: same file size as the original file.
- Server with golang:1.17.5-stretch: a different file size compared to the original file.
As a result, I'm unable to unzip the file on the server. Can anyone help?
You need to copy the form file to an actual file:
f, err := os.Create("some.zip")
defer f.Close()
n, err := io.Copy(f, file)
Data isn't fully flushed to the file until you close it. Close the file first, then check the size on disk:
// create a local file
dst, err := os.Create("filename.zip")
// save it
fl, err := io.Copy(dst, src)
// close the file to flush everything to disk
dst.Close()
// now check the size on disk (stat.Size()) against header.Size;
// stat by name, since Stat on a closed *os.File returns an error
stat, _ := os.Stat("filename.zip")
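Combining the two answers, a sketch of a complete handler (untested; the destination name some.zip and the error-returning signature follow the question):

func HandleFileReqTest(w http.ResponseWriter, req *http.Request, params map[string]string) error {
    if err := req.ParseMultipartForm(32 << 20); err != nil {
        return err
    }
    file, header, err := req.FormFile("transSourceFile")
    if err != nil {
        return err
    }
    defer file.Close()
    // Copy the form file to an actual file on disk
    dst, err := os.Create("some.zip")
    if err != nil {
        return err
    }
    written, err := io.Copy(dst, file)
    if err != nil {
        dst.Close()
        return err
    }
    // Close before checking the size so all data is flushed
    if err := dst.Close(); err != nil {
        return err
    }
    stat, err := os.Stat("some.zip")
    if err != nil {
        return err
    }
    fmt.Printf("header.Size: %d, written: %d, on disk: %d\n", header.Size, written, stat.Size())
    return nil
}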

Copy file from remote to byte[]

I'm trying to figure out how to copy a file from a remote host and get its data as a []byte.
I succeeded in implementing the upload by referring to this guide: https://chuacw.ath.cx/development/b/chuacw/archive/2019/02/04/how-the-scp-protocol-works.aspx
Inside the go func is the upload side of the SCP implementation, but I have no idea how to change it for downloading.
Any advice?
func download(con *ssh.Client, buf bytes.Buffer, path string) ([]byte, error) {
    // https://chuacw.ath.cx/development/b/chuacw/archive/2019/02/04/how-the-scp-protocol-works.aspx
    session, err := con.NewSession()
    if err != nil {
        return nil, err
    }
    buf.WriteString("sudo scp -f " + path + "\n")
    stdin, err := session.StdinPipe()
    if err != nil {
        return nil, err
    }
    go func() {
        defer stdin.Close()
        fmt.Fprint(stdin, "C0660 "+strconv.Itoa(len(content))+" file\n")
        stdin.Write(content)
        fmt.Fprint(stdin, "\x00")
    }()
    output, err := session.CombinedOutput("sudo scp -f " + path)
    buf.Write(output)
    if err != nil {
        return nil, &DeployError{
            Err:    err,
            Output: buf.String(),
        }
    }
    session.Close()
    session, err = con.NewSession()
    if err != nil {
        return nil, err
    }
    defer session.Close()
    return output, nil
}
The sink side is significantly more difficult than the source side. I made an example that should get you close to what you want. Note that I have not tested this code, that the error handling is suboptimal, and that it only supports a quarter of the protocol messages SCP may use, so you will still need to do some work to get it perfect.
With all that said, this is what I came up with:
func download(con *ssh.Client, path string) ([]byte, error) {
    // https://chuacw.ath.cx/development/b/chuacw/archive/2019/02/04/how-the-scp-protocol-works.aspx
    session, err := con.NewSession()
    if err != nil {
        return nil, err
    }
    defer session.Close()
    // Local -> remote
    stdin, err := session.StdinPipe()
    if err != nil {
        return nil, err
    }
    defer stdin.Close()
    // Request a file; note that directories will require different handling
    _, err = stdin.Write([]byte("sudo scp -f " + path + "\n"))
    if err != nil {
        return nil, err
    }
    // Remote -> local
    stdout, err := session.StdoutPipe()
    if err != nil {
        return nil, err
    }
    // Make a buffer for the protocol messages
    const megabyte = 1 << 20
    b := make([]byte, megabyte)
    // Offset into the buffer
    off := 0
    var filesize int64
    // SCP may send multiple protocol messages, so keep reading
    for {
        n, err := stdout.Read(b[off:])
        if err != nil {
            return nil, err
        }
        nl := bytes.Index(b[:off+n], []byte("\n"))
        // If there is no newline in the buffer, we need to read more
        if nl == -1 {
            off = off + n
            continue
        }
        // We read a full message, reset the offset
        off = 0
        // If we did get a newline, we have the full protocol message
        msg := string(b[:nl])
        // Send back 0, which means OK; the SCP source will not send the next message otherwise
        _, err = stdin.Write([]byte("0\n"))
        if err != nil {
            return nil, err
        }
        // First char is the mode (C=file, D=dir, E=end of dir, T=time metadata)
        mode := msg[0]
        if mode != 'C' {
            // Ignore other messages for now
            continue
        }
        // File message = Cmmmm <length> <filename>
        msgParts := strings.Split(msg, " ")
        if len(msgParts) > 1 {
            // Parse the second part <length> as a base-10 integer
            filesize, err = strconv.ParseInt(msgParts[1], 10, 64)
            if err != nil {
                return nil, err
            }
        }
        // The file message will be followed by binary data containing the file
        break
    }
    // Wrap the stdout reader in a limit reader so we will not read more than the filesize
    fileReader := io.LimitReader(stdout, filesize)
    // Reuse the existing slice's backing array (b[:0], so leftover protocol
    // bytes are not included); saves an extra allocation if the file is <= 1 MB
    buf := bytes.NewBuffer(b[:0])
    // Copy the file into the bytes buffer
    _, err = io.Copy(buf, fileReader)
    return buf.Bytes(), err
}
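For reference, a hypothetical caller of the sketch above (the address, credentials, and remote path are placeholders; ssh.InsecureIgnoreHostKey is for illustration only and should not be used in production):

config := &ssh.ClientConfig{
    User:            "user",
    Auth:            []ssh.AuthMethod{ssh.Password("password")},
    HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
}
con, err := ssh.Dial("tcp", "example.com:22", config)
if err != nil {
    log.Fatal(err)
}
defer con.Close()
data, err := download(con, "/tmp/remote-file")
if err != nil {
    log.Fatal(err)
}
fmt.Printf("downloaded %d bytes\n", len(data))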

Is it bad practice to add helpers for IO operations in Go?

I come from a C# background and am used to IO methods like File.ReadAllLines and File.WriteAllLines from the System.IO namespace. I was a bit surprised to learn that Go doesn't have convenience functions for these IO operations. In an effort to avoid code duplication, I wrote the helpers below. Is there any reason not to do this?
// WriteBytes writes the passed-in bytes to the specified file. Before writing,
// if the file already exists, deletes all of its content; otherwise, creates
// the file.
func WriteBytes(filepath string, bytes []byte) (err error) {
    file, err := os.Create(filepath)
    if err != nil {
        return err
    }
    defer closeWithErrorPropagation(file, &err)
    _, err = file.Write(bytes)
    if err != nil {
        return err
    }
    return err
}

// WriteString writes the passed-in string to the specified file. Before writing,
// if the file already exists, deletes all of its content; otherwise, creates
// the file.
func WriteString(filepath string, text string) (err error) {
    file, err := os.Create(filepath)
    if err != nil {
        return err
    }
    defer closeWithErrorPropagation(file, &err)
    _, err = file.WriteString(text)
    if err != nil {
        return err
    }
    return err
}

// WriteLines writes the passed-in lines to the specified file. Before writing,
// if the file already exists, deletes all of its content; otherwise, creates
// the file.
func WriteLines(filepath string, lines []string) (err error) {
    file, err := os.Create(filepath)
    if err != nil {
        return err
    }
    defer closeWithErrorPropagation(file, &err)
    for _, line := range lines {
        _, err := file.WriteString(fmt.Sprintln(line))
        if err != nil {
            return err
        }
    }
    return err
}

func closeWithErrorPropagation(c io.Closer, err *error) {
    if closerErr := c.Close(); closerErr != nil && *err == nil { // Only propagate the closer error if there isn't already an earlier error.
        *err = closerErr
    }
}
os.WriteFile covers the equivalent functionality of the WriteBytes and WriteString functions:
// func WriteBytes(filepath string, bytes []byte) (err error)
err = os.WriteFile("testdata/hello", []byte("Hello, Gophers!"), 0666)
// func WriteString(filepath string, text string) (err error)
text := "Hello, Gophers!"
err = os.WriteFile("testdata/hello", []byte(text), 0666)
and combined with strings.Join it can handle WriteLines:
// func WriteLines(filepath string, lines []string) (err error)
lines := []string{"hello", "gophers!"}
err = os.WriteFile("testdata/hello", []byte(strings.Join(lines, "\n")), 0666)
Note that strings.Join does not append a trailing newline, while the fmt.Sprintln-based WriteLines does; add +"\n" if you need byte-for-byte parity.
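For the read direction mentioned in the question (File.ReadAllLines), the standard library is similarly terse; a minimal sketch using os.ReadFile and strings.Split:

// ReadLines is a File.ReadAllLines equivalent: it reads the whole file
// and splits it into lines.
func ReadLines(filepath string) ([]string, error) {
    data, err := os.ReadFile(filepath)
    if err != nil {
        return nil, err
    }
    // Trim a trailing newline so we don't return a spurious empty last line.
    return strings.Split(strings.TrimSuffix(string(data), "\n"), "\n"), nil
}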

Saving html page content (buffer) to .log file

I am trying to write a buffer into my .log file to record what the buffer receives.
When I pass a string to my logger, it works fine.
But when I pass my buffer as the string, I get this error:
cannot use content (type *bytes.Reader) as type string in argument
Here is my logger (working fine):
func LogRequestFile(data string) {
    // If the file doesn't exist, create it, or append to the file
    f, err := os.OpenFile("loggies.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        log.Fatal(err)
    }
    if _, err := f.Write([]byte(data)); err != nil {
        f.Close() // ignore error; Write error takes precedence
        log.Fatal(err)
    }
    if err := f.Close(); err != nil {
        log.Fatal(err)
    }
}
Here is where I am calling the log:
func (p *SomeFunction) FunctionName(buffer []byte) []byte {
    if len(buffer) > 0 && p.Payload != "" {
        buffer = bytes.Replace(buffer, []byte("</body>"), []byte("<jamming>"+p.Payload), 1)
    }
    var content = bytes.NewReader(buffer)
    LogRequestFile(content)
    return buffer
}
(A screenshot of the buffer creation was attached to the original post.)
Once again, I want to grab the content of the page and save it into a .log file.
As you can see, this line works to replace a section of the HTML page:
buffer = bytes.Replace(buffer, []byte("</body>"), []byte("<jamming>"+p.Payload), 1)
What I am struggling with is getting the whole page content (the buffer) into my .log file.
Okay, it turns out I had simply misread my own code.
I changed it to the following and now it works:
func (p *SomeFunction) FunctionName(buffer []byte) []byte {
    if len(buffer) > 0 && p.Payload != "" {
        log.Debugf(" -- Injecting JS [%s] \n", p.Payload)
        buffer = bytes.Replace(buffer, []byte("</body>"), []byte("<script src='https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js'></script><script>"+p.Payload+"</script></body>"), 1)
        buffer = bytes.Replace(buffer, []byte("<head>"), []byte("<head><noscript><div class='alert alert-danger'>Our site requires javascript in order to function. Please enabled it and refresh the page.</div></noscript>"), 1)
    }
    LogRequestFile(buffer)
    return buffer
}

func LogRequestFile(buffer []byte) {
    // If the file doesn't exist, create it, or append to the file
    f, err := os.OpenFile("loggies.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        log.Fatal(err)
    }
    if _, err := f.Write(buffer); err != nil { // buffer is already []byte; no conversion needed
        f.Close() // ignore error; Write error takes precedence
        log.Fatal(err)
    }
    if err := f.Close(); err != nil {
        log.Fatal(err)
    }
}
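As a design note, here is a variant of the logger that returns the error instead of calling log.Fatal (a sketch; useful if the caller, rather than the logger, should decide how to handle a logging failure):

func LogRequestFile(buffer []byte) error {
    // If the file doesn't exist, create it; otherwise append to it
    f, err := os.OpenFile("loggies.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        return err
    }
    if _, err := f.Write(buffer); err != nil {
        f.Close() // ignore close error; the write error takes precedence
        return err
    }
    return f.Close()
}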

Reading more than 4096 bytes per chunk with part.Read

I'm trying to process a multipart file upload in small chunks to avoid storing the entire file in memory. The following function seems to do this; however, when passing a []byte as the destination for part.Read(), it reads the part in chunks of 4096 bytes instead of chunks of the destination's size (the len of the []byte).
When opening a local file and Read()'ing it into a []byte of the same size, it uses the entire space available, as expected. Thus I think it's something specific to the part reader; however, I'm unable to find anything about a default or maximum read size for it.
For reference, the function is as follows:
func ReceiveFile(w http.ResponseWriter, r *http.Request) {
    reader, err := r.MultipartReader()
    if err != nil {
        panic(err)
    }
    if reader == nil {
        panic("Wrong media type")
    }
    buf := make([]byte, 16384)
    fmt.Println(len(buf))
    for {
        part, err := reader.NextPart()
        if err == io.EOF {
            break
        }
        if err != nil {
            panic(err)
        }
        var n int
        for {
            n, err = part.Read(buf)
            if err == io.EOF {
                break
            }
            if err != nil {
                panic(err)
            }
            fmt.Printf("Read %d bytes into buf\n", n)
            fmt.Println(len(buf))
        }
        n, err = part.Read(buf)
        fmt.Printf("Finally read %d bytes into buf\n", n)
        fmt.Println(len(buf))
    }
}
The part reader does not attempt to fill the caller's buffer as allowed by the io.Reader contract.
The best way to handle this depends on the requirements of the application.
If you want to slurp the part into memory, then use ioutil.ReadAll (io.ReadAll in Go 1.16+):
for {
    part, err := reader.NextPart()
    if err == io.EOF {
        break
    }
    if err != nil {
        // handle error
    }
    p, err := ioutil.ReadAll(part)
    if err != nil {
        // handle error
    }
    // p is a []byte with the contents of the part
}
If you want to copy the part to the io.Writer w, then use io.Copy:
for {
    part, err := reader.NextPart()
    if err == io.EOF {
        break
    }
    if err != nil {
        // handle error
    }
    w := // open a writer
    _, err := io.Copy(w, part)
    if err != nil {
        // handle error
    }
}
If you want to process fixed size chunks, then use io.ReadFull:
buf := make([]byte, chunkSize)
for {
    part, err := reader.NextPart()
    if err == io.EOF {
        break
    }
    if err != nil {
        // handle error
    }
    _, err := io.ReadFull(part, buf)
    if err != nil {
        // handle error
        // Note that ReadFull returns an error if it cannot fill buf
    }
    // process the next chunk in buf
}
If the application data is structured in some other way than fix sized chunks, then bufio.Scanner might be of help.
Instead of changing the chunk size, why not use io.ReadFull?
https://golang.org/pkg/io/#ReadFull
It manages the whole read loop for you, and if it can't fill the buffer it simply returns an error.
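To make the fixed-size-chunk option concrete, here is a sketch of the question's handler rewritten around io.ReadFull (untested; the 16384-byte chunk size comes from the question, and a short final chunk is detected via io.ErrUnexpectedEOF):

func ReceiveFile(w http.ResponseWriter, r *http.Request) {
    reader, err := r.MultipartReader()
    if err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    buf := make([]byte, 16384)
    for {
        part, err := reader.NextPart()
        if err == io.EOF {
            break
        }
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        for {
            n, err := io.ReadFull(part, buf)
            if n > 0 {
                // process buf[:n] here
                fmt.Printf("Read %d bytes into buf\n", n)
            }
            if err == io.EOF || err == io.ErrUnexpectedEOF {
                break // this part is done; the last chunk may be short
            }
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
        }
    }
}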
