I use github.com/pkg/sftp to work with an SFTP server in Go.
I want to download a file from the SFTP server.
For that I need to get the bytes of this file and copy them to a local file, right?
First I get my file with the OpenFile function:
file, err := sc.OpenFile("/backup/" + serverid + "/" + f.Name())
if err != nil {
    fmt.Fprintf(os.Stderr, "Unable to open file: %v\n", err)
    return err
}
myfiles, err := file.HERE()
os.WriteFile("/text.txt", myfiles, perm)
return nil
But then I need to get the bytes of this file; how can I do that?
What should I put in place of HERE?
Resolved with:
myfile, err := io.ReadAll(file)
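For reference, here is a minimal sketch of the whole download under that approach (sc is assumed to be an already-connected *sftp.Client, and the paths are placeholders). Streaming with io.Copy avoids holding the entire file in memory, which io.ReadAll would do:

// Sketch: download a remote file over SFTP by streaming it to disk.
src, err := sc.Open("/backup/" + serverid + "/" + f.Name())
if err != nil {
    fmt.Fprintf(os.Stderr, "Unable to open file: %v\n", err)
    return err
}
defer src.Close()

dst, err := os.Create("text.txt") // local destination, placeholder name
if err != nil {
    return err
}
defer dst.Close()

// io.Copy streams in chunks, so the whole file never sits in memory.
if _, err := io.Copy(dst, src); err != nil {
    return err
}
return nil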
Below is a snippet of my code, which collects some gzip-compressed PDF files.
I want to add the PDFs to a tar.gz file, but before adding them they need to be uncompressed (gunzipped); I don't want to end up with a tar.gz full of pdf.gz files.
I need to decompress them without reading the entire file into memory. The problem: the PDF files in the tar.gz come out clipped and corrupted. When I compare them with the original PDF files they look equal, except that the last part of each file is missing.
// Create a new gzip writer with compression level 1
gzw, _ := gzip.NewWriterLevel(w, 1)
defer gzw.Close()

// Create a new tar writer
tw := tar.NewWriter(gzw)
defer tw.Close()

file_path := "path-to-file.pdf.gz"
file_name := "filename-shown-in-tar.pdf"

// Open file to add to the tar
fp, err := os.Open(file_path)
if err != nil {
    log.Printf("Error: %v", err)
}
defer fp.Close()

info, err := fp.Stat()
if err != nil {
    log.Printf("Error: %v", err)
}

header, err := tar.FileInfoHeader(info, file_name)
if err != nil {
    log.Printf("Error: %v", err)
}
header.Name = file_name
tw.WriteHeader(header)

// This part writes the *.pdf.gz files directly into the tar.gz file.
// It works: the tar.gz file opens, and the individual pdf.gz files
// inside it can be opened afterwards.
//io.Copy(tw, fp)

// This part decompresses the gzip before adding, but it clips the PDF
// files in the tar.gz file.
gzr, err := gzip.NewReader(fp)
if err != nil {
    log.Printf("Error: %v", err)
}
defer gzr.Close()
io.Copy(tw, gzr)
Update
I got a suggestion from a comment, but now the PDF files inside the tar can't be opened. The tar.gz file is created and can be opened, but the PDF files inside are corrupted.
I have compared output files from the tar.gz with the original PDFs. The corrupted file is missing the last part: in one example the original has 498 lines and the corrupted copy only 425, but those 425 lines match the original. Somehow the end is just clipped.
The issue is that you're setting the file info header based on the original file, which is compressed. In particular, it is the size that causes the problem: if you attempt to write more than the Size value in the header indicates, archive/tar.Writer.Write() returns ErrWriteTooLong - see https://github.com/golang/go/blob/d5efd0dd63a8beb5cc57ae7d25f9c60d5dea5c65/src/archive/tar/writer.go#L428-L429
Something like the following should work, whereby the file is decompressed and read first so that an accurate size can be established:
// Open file to add to the tar
fp, err := os.Open(file_path)
if err != nil {
    log.Printf("Error: %v", err)
}
defer fp.Close()

gzr, err := gzip.NewReader(fp)
if err != nil {
    panic(err)
}
defer gzr.Close()

// Read the decompressed content so its exact size is known
data, err := io.ReadAll(gzr)
if err != nil {
    log.Printf("Error: %v", err)
}

// Create a tar header for the file, using the decompressed size
header := &tar.Header{
    Name: file_name,
    Mode: 0600,
    Size: int64(len(data)),
}

// Write the header to the tar
if err = tw.WriteHeader(header); err != nil {
    log.Printf("Error: %v", err)
}

// Write the file content to the tar
if _, err = tw.Write(data); err != nil {
    log.Printf("Error: %v", err)
}
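The question also asked for a way to avoid reading the entire file into memory. A sketch of one alternative (my own, not from the original thread): decompress to a temporary file first, which yields the exact uncompressed size for the header, then stream into the tar with io.Copy so only a small buffer is held in memory at any time:

// Sketch: add one gzipped file to tw (*tar.Writer) without buffering the
// whole decompressed content in memory.
func addGzippedFile(tw *tar.Writer, filePath, fileName string) error {
    fp, err := os.Open(filePath)
    if err != nil {
        return err
    }
    defer fp.Close()

    gzr, err := gzip.NewReader(fp)
    if err != nil {
        return err
    }
    defer gzr.Close()

    // Stream the decompressed bytes into a temp file instead of a []byte.
    tmp, err := os.CreateTemp("", "untar-*")
    if err != nil {
        return err
    }
    defer os.Remove(tmp.Name())
    defer tmp.Close()

    size, err := io.Copy(tmp, gzr)
    if err != nil {
        return err
    }
    if _, err := tmp.Seek(0, io.SeekStart); err != nil {
        return err
    }

    // The header gets the true decompressed size, so tar.Writer.Write
    // never sees more bytes than the header promises.
    header := &tar.Header{Name: fileName, Mode: 0600, Size: size}
    if err := tw.WriteHeader(header); err != nil {
        return err
    }
    _, err = io.Copy(tw, tmp) // streams in chunks
    return err
}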
I have the following piece of code, which creates an output file on a local drive. I need it to do the same on a network-mapped drive; let's call it H:.
The file name (full path) is entered on the command line as argument 1.
I am using Windows 10/Server 2016.
// The following will create and append to the file when required.
outfile, erro := os.OpenFile(os.Args[1], os.O_CREATE|os.O_APPEND|os.O_RDWR, 0666)
if erro != nil {
    panic(erro)
}
defer outfile.Close()
I use the following function to write a map into this file.
func map2Linpro(inp map[string][]string, outfile io.Writer) {
    for k, v := range inp {
        _, err := fmt.Fprintf(outfile, "%s %s=%s %s\n", v[0], k, v[1], v[2])
        if err != nil {
            fmt.Println("Error Writing to File: ", err)
        }
    }
}
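For context, a minimal sketch of how this function would be called (the map contents here are made up for illustration); since *os.File satisfies io.Writer, the file opened above can be passed directly:

// Hypothetical input: key -> [timestamp, value, tag]
data := map[string][]string{
    "cpu_load": {"2016-09-01T10:00:00", "0.75", "host=srv01"},
}
map2Linpro(data, outfile)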
Everything works just fine if the output file is on the local drive, but when using a full path with the mapped drive letter, I get the following error:
Error: write h://00_sc//dest01.txt: The parameter is incorrect.
I searched for a reason but could not find one. I would appreciate it if someone could help.
The following is the error I got after adding panic(erro) after OpenFile; since the panic did not fire, the error's source must be fmt.Fprintf:
Error Writing to File: write H:/00_sc/dest01.txt: The parameter is incorrect.
Thanks to all.
outfile, _ := os.OpenFile(os.Args[2], os.O_CREATE|os.O_APPEND, 0666)
should read
outfile, err := os.OpenFile(os.Args[2], os.O_CREATE|os.O_APPEND, 0666)
if err != nil {
panic(err)
}
Rewrite those lines, and the resulting error message should give a clue as to the cause.
I can read the file after opening it with Open() or OpenFile(path, os.O_RDONLY), but I cannot remove the file afterwards. So I tried opening the file with the write flag os.O_RDWR, as in the code below, to see whether I could then remove it. However, with os.O_RDWR I couldn't even read the file. Could anyone explain why this happens? I get the error sftp: "Permission denied" (SSH_FX_PERMISSION_DENIED).
I have checked the permissions of the file; it is -rwxrwxrwx.
import (
    "github.com/pkg/sftp"
)

config = sftp.NewConfig(nil)
config.SetAcct("xxxxx", "xxxxx")
config.SetDes("ip address", 1234)
config.Connect()

if file, err = config.Client.OpenFile(path, os.O_RDWR); err != nil {
    log.Println("Cannot open "+path+", err:", err)
}
if _, err = ioutil.ReadAll(file); err != nil {
    log.Println("Cannot read "+path+", err:", err)
}
file.Close()

err = config.Client.Remove(file)
if err != nil {
    log.Println("cannot remove file:", err)
}
Problem solved: it turned out I had opened the file earlier without closing it, and the file was still held open by FreeSSHDService. That is why I could not remove it.
Also, you have to provide the file path; you have passed the file handle instead:
config.Client.Remove(pathToFile)
defer file.Close()
Here is the reference: https://godoc.org/github.com/pkg/sftp#Client.Remove
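Putting both fixes together, a minimal sketch of the intended sequence (assuming client is a connected *sftp.Client from github.com/pkg/sftp and path is a placeholder):

// Sketch: read a remote file, close the handle, then remove it by path.
path := "/remote/dir/file.txt" // placeholder
file, err := client.OpenFile(path, os.O_RDWR)
if err != nil {
    log.Println("Cannot open "+path+", err:", err)
    return
}
data, err := io.ReadAll(file)
if err != nil {
    log.Println("Cannot read "+path+", err:", err)
}
file.Close() // close the handle before removal, or the server may refuse

if err := client.Remove(path); err != nil { // Remove takes the path, not the handle
    log.Println("cannot remove "+path+", err:", err)
}
fmt.Printf("read %d bytes before removing\n", len(data))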
I'm using Go's gob encoding to transfer large files (~1 GB) or many small files (~30 MB). The server runs in a loop and receives files as clients send them.
My code works if I send one large file or a few small files, but when I send a large file for the second time it fails with fatal error: runtime: out of memory. If I send a large file, stop the program, start it again, and send another large file, it works.
It looks like after receiving a file via gob and writing it to disk, the memory is not released.
Server code
type FileGob struct {
    FileName    string
    FileLen     int
    FileContent []byte
}

func handleConnection(conn net.Conn) {
    transf := &FileGob{}
    dec := gob.NewDecoder(conn)
    err := dec.Decode(transf) // file from conn to var transf
    if err != nil {
        fmt.Println("error to decode into buffer:", err)
    }
    conn.Close()
    file, err := os.Create("sent_" + transf.FileName)
    if err != nil {
        fmt.Println("error to create file:", err)
    }
    file.Write(transf.FileContent) // writes into file
    fileStat, err := file.Stat()
    if err != nil {
        fmt.Println("error to get File Stat:", err)
    }
    file.Close()
    fmt.Printf("File %v was transferred\n", transf.FileName)
    fmt.Printf("Transferred: %d, Expected: %d\n", fileStat.Size(), transf.FileLen)
}
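Since FileContent carries the whole file as a single []byte, every transfer allocates the full file size on the heap, and the runtime may not have returned the previous allocation to the OS before the next one arrives. A sketch of one way around this (my own suggestion; the FileHeader struct is an assumption, and the client would have to be changed to send a small gob header followed by the raw bytes): stream the body with io.CopyN so memory use stays constant regardless of file size:

// Sketch: receive a small gob header, then stream the body straight to disk.
type FileHeader struct {
    FileName string
    FileLen  int64
}

func handleConnStreaming(conn net.Conn) {
    defer conn.Close()

    // Wrap conn in a bufio.Reader and hand the same reader to gob and
    // io.CopyN; gob would otherwise add its own buffering and could
    // swallow the first bytes of the body.
    br := bufio.NewReader(conn)

    var hdr FileHeader
    if err := gob.NewDecoder(br).Decode(&hdr); err != nil {
        fmt.Println("error decoding header:", err)
        return
    }

    file, err := os.Create("sent_" + hdr.FileName)
    if err != nil {
        fmt.Println("error creating file:", err)
        return
    }
    defer file.Close()

    // Copy exactly FileLen bytes from the connection to the file in
    // fixed-size chunks; the whole file never sits in memory.
    n, err := io.CopyN(file, br, hdr.FileLen)
    if err != nil {
        fmt.Println("error copying body:", err)
        return
    }
    fmt.Printf("Transferred: %d, Expected: %d\n", n, hdr.FileLen)
}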
I am trying to read gzip files using compress/gzip. I use http.DetectContentType because I do not know ahead of time whether I'll get a plain text file or a gzipped one. My code is very straightforward, as below:
f, err := os.Open(fullpath)
if err != nil {
    log.Panicf("Can not open file %s: %v", fullpath, err)
    return ""
}
defer f.Close()

buff := make([]byte, 512)
_, err = f.Read(buff)
if err != nil && err != io.EOF {
    log.Panicf("Cannot read buffer %v", err)
    return ""
}

switch filetype := http.DetectContentType(buff); filetype {
case "application/x-gzip":
    log.Println("File Type is", filetype)
    reader, err := gzip.NewReader(f)
    if err != nil && err != io.EOF {
        log.Panicf("Cannot read gzip archive %v", err)
        return ""
    }
    defer reader.Close()

    target := "/xx/yy/abcd.txt"
    writer, err := os.Create(target)
    if err != nil {
        log.Panicf("Cannot write unarchived file %v", err)
        return ""
    }
    defer writer.Close()

    _, err = io.Copy(writer, reader)
    return target
The problem is that the gzip reader always errors out with "Cannot read gzip archive gzip: invalid header". I have tried the zlib library too, but in vain. I gzipped the source file on a Mac using the command-line gzip tool.
Please show me where I am going wrong.
You're reading the first 512 bytes of the file, so the gzip.Reader will never see them. Since these are regular files, you can seek back to the start after a successful Read:
f.Seek(0, io.SeekStart) // io.SeekStart is the current name for the deprecated os.SEEK_SET
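A minimal sketch of the whole detect-then-decompress flow with the seek added (fullpath and target are placeholders carried over from the question):

// Sketch: sniff the content type, rewind, then decompress if gzipped.
f, err := os.Open(fullpath)
if err != nil {
    log.Panicf("Can not open file %s: %v", fullpath, err)
}
defer f.Close()

buff := make([]byte, 512)
if _, err := f.Read(buff); err != nil && err != io.EOF {
    log.Panicf("Cannot read buffer %v", err)
}

// Rewind so the gzip reader sees the stream from its first byte.
if _, err := f.Seek(0, io.SeekStart); err != nil {
    log.Panicf("Cannot seek %v", err)
}

if http.DetectContentType(buff) == "application/x-gzip" {
    reader, err := gzip.NewReader(f)
    if err != nil {
        log.Panicf("Cannot read gzip archive %v", err)
    }
    defer reader.Close()

    writer, err := os.Create(target)
    if err != nil {
        log.Panicf("Cannot write unarchived file %v", err)
    }
    defer writer.Close()

    if _, err := io.Copy(writer, reader); err != nil {
        log.Panicf("Cannot copy %v", err)
    }
}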