Can't understand code about Go's print function

I am new to Go. When I read the code example for the package "archive/tar", I came across some code like this:
// Iterate through the files in the archive.
for {
	hdr, err := tr.Next()
	if err == io.EOF {
		// end of tar archive
		break
	}
	if err != nil {
		log.Fatalln(err)
	}
	fmt.Printf("Contents of %s:\n", hdr.Name)
	if _, err := io.Copy(os.Stdout, tr); err != nil {
		log.Fatalln(err)
	}
	fmt.Println()
}
The output looks like this:
Contents of readme.txt:
This archive contains some text files.
Contents of gopher.txt:
Gopher names:
George
Geoffrey
Gonzo
Contents of todo.txt:
Get animal handling license.
Can anyone tell me how the program prints the body of the struct? Thank you.

You left out a vital piece of the example, the two lines preceding what you posted.
// Open the tar archive for reading.
r := bytes.NewReader(buf.Bytes())
tr := tar.NewReader(r)
This creates a tar.Reader which implements io.Reader. The statement io.Copy(os.Stdout, tr) in the if statement knows how to copy the contents of the reader to Stdout.
Godoc for tar.Reader
Also might be useful to note that the code example in the package documentation doesn't ever write the tar it creates to disk. It is all done in memory using bytes.Buffers. Examples of writing to disk would be in the io package.
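For completeness, a minimal sketch of that last point, assuming buf is the bytes.Buffer the example's tar.Writer wrote into:

f, err := os.Create("archive.tar")
if err != nil {
	log.Fatalln(err)
}
defer f.Close()
// buf already holds the complete tar stream, so writing the archive
// to disk is a plain io.Copy from the buffer into the file.
if _, err := io.Copy(f, &buf); err != nil {
	log.Fatalln(err)
}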

Related

How to extract .7z files in Go

I have a 7z archive of a number of .txt files. I am trying to list all the files in the archive and upload them to an S3 bucket. But I'm having trouble extracting .7z archives in Go. To do this, I found a package github.com/gen2brain/go-unarr (imported as extractor), and this is what I have so far:
content, err := ioutil.ReadFile("sample_archive.7z")
if err != nil {
	fmt.Printf("err: %+v", err)
}
a, err := extractor.NewArchiveFromMemory(content)
if err != nil {
	fmt.Printf("err: %+v", err)
}
lst, _ := a.List()
fmt.Printf("lst: %+v", lst)
This prints a list of all the files in the archive. But this has two issues.
It reads the file from local disk using ioutil, and the input of NewArchiveFromMemory must be of type []byte. But I can't read from local disk and will have to use a file already in memory of type os.File. So I will either have to find a different method or convert the os.File to []byte. There's another method, NewArchiveFromReader(r io.Reader), but this is returning an error saying Bad File Descriptor.
file, err := os.OpenFile(
	path,
	os.O_WRONLY|os.O_TRUNC|os.O_CREATE,
	0666,
)
a, err := extractor.NewArchiveFromReader(file)
if err != nil {
	fmt.Printf("ERROR: %+v", err)
}
lst, _ := a.List()
fmt.Printf("files: %+v\n", lst)
I am able to get the list of the files in the archive. And using Extract(destination_path string), I can also extract it to a local directory. But I want the extracted files also as os.File values (i.e. a list of os.File, since there will be multiple files).
How can I change my current code to achieve both of the above targets? Is there another library to do this?
os.File implements the io.Reader interface (because it has a Read([]byte) (int, error) method defined), so you can use NewArchiveFromReader(file) without any conversions needed. The Bad File Descriptor error comes from opening the file with os.O_WRONLY: reads on a write-only descriptor fail. Open the file for reading instead. You can read up on Go interfaces for more background on why that works.
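A minimal sketch of that, reusing path from your snippet:

// os.Open opens the file read-only, which is all the extractor
// needs; *os.File satisfies io.Reader directly.
file, err := os.Open(path)
if err != nil {
	fmt.Printf("ERROR: %+v", err)
}
defer file.Close()

a, err := extractor.NewArchiveFromReader(file)
if err != nil {
	fmt.Printf("ERROR: %+v", err)
}
lst, _ := a.List()
fmt.Printf("files: %+v\n", lst)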
If you're okay with extracting to a local directory, you can do that and then read the files back in (warning, may contain typos):
func extractAndOpenAll(a *extractor.Archive) ([]*os.File, error) {
	dir := "/tmp/path" // consider using ioutil.TempDir()
	if err := a.Extract(dir); err != nil {
		return nil, err
	}
	filestats, err := ioutil.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	// warning: all these file handles must be closed by the caller,
	// which is why even the error case here returns the list of files.
	// if you forget, your process might leak file handles.
	files := make([]*os.File, 0)
	for _, fs := range filestats {
		// fs.Name() is relative, so join it with the directory
		// (needs "path/filepath" imported)
		file, err := os.Open(filepath.Join(dir, fs.Name()))
		if err != nil {
			return files, err
		}
		files = append(files, file)
	}
	return files, nil
}
It is possible to use the archived files without writing back to disk (https://github.com/gen2brain/go-unarr#read-all-entries-from-archive), but whether or not you should do that instead depends on what your next step is.
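For instance, a sketch of that in-memory route, reusing content from your first snippet; one hedge: it assumes go-unarr's Entry, ReadAll, and Name methods behave as its README describes (Entry advances to the next entry and returns io.EOF at the end):

a, err := extractor.NewArchiveFromMemory(content)
if err != nil {
	log.Fatal(err)
}
defer a.Close()

// Collect every entry's contents into memory, keyed by name,
// without touching the local disk at all.
files := map[string][]byte{}
for {
	if err := a.Entry(); err != nil {
		if err == io.EOF {
			break
		}
		log.Fatal(err)
	}
	data, err := a.ReadAll()
	if err != nil {
		log.Fatal(err)
	}
	files[a.Name()] = data
}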

golang zlib reader output not being copied over to stdout

I've modified the official documentation example for the zlib package to use an opened file rather than a set of hardcoded bytes (code below).
The code reads in the contents of a source text file and compresses it with the zlib package. I then try to read back the compressed file and print its decompressed contents to stdout.
The code doesn't error, but it also doesn't do what I expect it to do, which is to display the decompressed file contents on stdout.
Also: is there another way of displaying this information, rather than using io.Copy?
package main

import (
	"compress/zlib"
	"io"
	"log"
	"os"
)

func main() {
	var err error

	// This defends against an error preventing `defer` from being called
	// As log.Fatal otherwise calls `os.Exit`
	defer func() {
		if err != nil {
			log.Fatalln("\nDeferred log: \n", err)
		}
	}()

	src, err := os.Open("source.txt")
	if err != nil {
		return
	}
	defer src.Close()

	dest, err := os.Create("new.txt")
	if err != nil {
		return
	}
	defer dest.Close()

	zdest := zlib.NewWriter(dest)
	defer zdest.Close()

	if _, err := io.Copy(zdest, src); err != nil {
		return
	}

	n, err := os.Open("new.txt")
	if err != nil {
		return
	}
	r, err := zlib.NewReader(n)
	if err != nil {
		return
	}
	defer r.Close()
	io.Copy(os.Stdout, r)

	err = os.Remove("new.txt")
	if err != nil {
		return
	}
}
Your defer func doesn't do anything useful, because the `if _, err := io.Copy(...)` statement declares a new, shadowed err variable, so the outer err that the deferred func checks never sees that error. If you want a defer to run before exiting on an error, return from a separate function and call log.Fatal after that function returns.
As for why you're not seeing any output: it's because you're deferring all the Close calls. The zlib.Writer isn't flushed until after the function exits, and neither is the destination file. Call Close() explicitly where you need it:
zdest := zlib.NewWriter(dest)
if _, err := io.Copy(zdest, src); err != nil {
	log.Fatal(err)
}
zdest.Close()
dest.Close()
I think you messed up the code logic with all this defer stuff and your "trick" err checking.
Files are definitively written only when flushed or closed. You copy into new.txt without closing it before opening it again to read it.
Deferring the closing of a file is neat inside a function which has multiple exits: it makes sure the file is closed once the function is left. But your main requires new.txt to be closed after the copy, before re-opening it. So don't defer the close here.
BTW: your defense against log.Fatal terminating the program without calling your defers is, well, at least strange. The files are all put into a proper state by the OS anyway; there is absolutely no need to complicate the code like this.
Check the error from the second Copy:
2015/12/22 19:00:33
Deferred log:
unexpected EOF
exit status 1
The thing is, you need to close zdest immediately after you're done writing. Close it after the first Copy and it works.
I would suggest using io.MultiWriter. That way you read from src only once. Not much gain for small files, but it is faster for bigger files.
w := io.MultiWriter(dest, os.Stdout)
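Fleshed out, that one-liner needs one adjustment to show the decompressed text: the MultiWriter should combine the zlib writer (not the raw file) with stdout, since anything written to dest is compressed bytes. A minimal sketch:

package main

import (
	"compress/zlib"
	"io"
	"log"
	"os"
)

func main() {
	src, err := os.Open("source.txt")
	if err != nil {
		log.Fatalln(err)
	}
	defer src.Close()

	dest, err := os.Create("new.txt")
	if err != nil {
		log.Fatalln(err)
	}
	defer dest.Close()

	zdest := zlib.NewWriter(dest)
	defer zdest.Close()

	// One pass over src: the plain text goes to stdout while the
	// compressed bytes go to new.txt.
	w := io.MultiWriter(zdest, os.Stdout)
	if _, err := io.Copy(w, src); err != nil {
		log.Fatalln(err)
	}
}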

How to edit a reader in Go

I'm trying to work out what the best practice is to change some data in a stream without ioutil.ReadAll.
I need to remove lines beginning with a certain character and strip all instances of another.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os"

	"gopkg.in/pg.v3"
)

func main() {
	fieldSep := "\x01"
	badChar := "\x02"
	comment := "#"
	dbName := "foo"

	db := pg.Connect(&pg.Options{})

	file, err := os.Open("/path/to/file")
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	}
	defer file.Close()

	// I need to iterate my file Reader here
	// all lines that begin with comment and remove them
	scanner := bufio.NewScanner(file)
	for scanner.Scan() {
		file := bytes.TrimRight(file, comment)
	}

	// all instances of badChar should be dropped
	file := bytes.Trim(file, badChar)

	_, err = db.CopyFrom(file, fmt.Sprintf("COPY %s FROM STDIN WITH DELIMITER e'%s'", dbName, fieldSep))
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	}

	err = db.Close()
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	}

	fmt.Println("Import Done")
}
Context:
I'm importing a large amount (>10GB) of data into a database; it's spread across several files.
My database interface accepts a reader to load the data.
The data has non-standard line endings and I need to strip comments (because PG's COPY FROM is no fun).
I know the code I've got to edit the stream is woeful; I just can't find a good reference. Thanks!
If I were in your position, I'd make my own Reader and insert it between the source and the destination. That's what consistent interfaces are for. Your reader would work on the small chunks of data as they flow past.
Source (io.Reader)  ==>  Your filter (io.Reader)  ==>  Destination (expects an io.Reader)
provides the data        does the transformations      rock'n'rolls
A library example of such a reader, made to be inserted between a reader and its client, is bufio.Reader: it speeds up many kinds of readers by issuing larger reads to the source, while letting the client consume the data in small bits if it likes. You can check out its source: http://golang.org/src/bufio/bufio.go
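A minimal sketch of such a filter, assuming newline-delimited input; the filterReader helper and its names are mine, not from any library. It drops comment lines, strips the bad character, and the result is itself an io.Reader you could hand straight to db.CopyFrom:

package main

import (
	"bufio"
	"io"
	"os"
	"strings"
)

// filterReader returns an io.Reader that drops lines beginning with
// comment and removes every instance of badChar from the rest.
func filterReader(src io.Reader, comment, badChar string) io.Reader {
	pr, pw := io.Pipe()
	go func() {
		scanner := bufio.NewScanner(src)
		for scanner.Scan() {
			line := scanner.Text()
			if strings.HasPrefix(line, comment) {
				continue // drop comment lines entirely
			}
			line = strings.ReplaceAll(line, badChar, "")
			if _, err := io.WriteString(pw, line+"\n"); err != nil {
				pw.CloseWithError(err)
				return
			}
		}
		// Propagate any scan error; a nil error means a clean EOF.
		pw.CloseWithError(scanner.Err())
	}()
	return pr
}

func main() {
	src := strings.NewReader("# a comment\nkeep\x02this\n")
	if _, err := io.Copy(os.Stdout, filterReader(src, "#", "\x02")); err != nil {
		panic(err)
	}
}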

Trying to write input from keyboard into a file in Golang

I am trying to take input from the keyboard and then store it in a text file, but I am a bit confused about how to actually do it.
My current code is as follows:
// reads the file text.txt
bs, err := ioutil.ReadFile("text.txt")
if err != nil {
	panic(err)
}

// Prints out content
textInFile := string(bs)
fmt.Println(textInFile)

// Standard input from keyboard
var userInput string
fmt.Scanln(&userInput)

// Now I want to write input back to file text.txt
// func WriteFile(filename string, data []byte, perm os.FileMode) error
inputData := make([]byte, len(userInput))
err := ioutil.WriteFile("text.txt", inputData, )
There are so many functions in the "os" and "io" packages; I am very confused about which one I should actually use for this purpose.
I am also confused about what the third argument in the WriteFile function should be. The documentation says it is of type perm os.FileMode, but since I am new to programming and Go I am a bit clueless.
Does anybody have any tips on how to proceed?
Thanks in advance,
Marie
// reads the file text.txt
bs, err := ioutil.ReadFile("text.txt")
if err != nil { // may want logic to create the file if it doesn't exist
	panic(err)
}
fmt.Println(string(bs)) // print the current contents

var userInput []string
var n int

// read in multiple lines from user input
// until user enters the EOF char
for ln := ""; err == nil; n, err = fmt.Scanln(&ln) {
	if n > 0 { // we actually read something into the string
		userInput = append(userInput, ln)
	} // if we didn't read anything, err is probably set
}

// open the file to append to it;
// 0666 corresponds to unix perms rw-rw-rw-,
// which means anyone can read or write it
out, err := os.OpenFile("text.txt", os.O_APPEND|os.O_WRONLY, 0666)
if err != nil { // assuming the file didn't somehow break
	panic(err)
}
defer out.Close() // we'll close this file as we leave scope, no matter what

// write each of the user input lines followed by a newline
for _, outLn := range userInput {
	io.WriteString(out, outLn+"\n")
}
I've made sure this compiles and runs on play.golang.org, but I'm not at my dev machine, so I can't verify that it's interacting with Stdin and the file entirely correctly. This should get you started though.
For example,
package main

import (
	"fmt"
	"io/ioutil"
	"os"
)

func main() {
	fname := "text.txt"

	// print text file
	textin, err := ioutil.ReadFile(fname)
	if err == nil {
		fmt.Println(string(textin))
	}

	// append text to file
	f, err := os.OpenFile(fname, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0666)
	if err != nil {
		panic(err)
	}
	var textout string
	fmt.Scanln(&textout)
	_, err = f.Write([]byte(textout))
	if err != nil {
		panic(err)
	}
	f.Close()

	// print text file
	textin, err = ioutil.ReadFile(fname)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(textin))
}
If you simply want to append the user's input to a text file, you could just read the input as you've already done and use ioutil.WriteFile, as you've tried to do. So you've already got the right idea.
To make your approach work, the simplified solution would be this:
// Read old text
current, err := ioutil.ReadFile("text.txt")

// Standard input from keyboard
var userInput string
fmt.Scanln(&userInput)

// Append the new input to the old using builtin `append`
newContent := append(current, []byte(userInput)...)

// Now write the input back to file text.txt
err = ioutil.WriteFile("text.txt", newContent, 0666)
The last parameter of WriteFile is a flag which specifies the various options for files. The higher bits are options like the file type (os.ModeDir, for example) and the lower bits represent the permissions in the form of UNIX permissions (0666, in octal format, stands for user rw, group rw, others rw). See the documentation for more details.
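A tiny illustration of those bits, for what it's worth (os.FileMode's String method renders them ls-style):

var perm os.FileMode = 0666
fmt.Println(perm)              // -rw-rw-rw-
fmt.Println(os.ModeDir | 0755) // drwxr-xr-x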
Now that your code works, we can improve it, for example by keeping the file open instead of opening it twice:
// Open the file for reading and writing (O_RDWR), append to it if it
// has content, create it if it does not exist, use 0666 for
// permissions on creation.
file, err := os.OpenFile("text.txt", os.O_RDWR|os.O_APPEND|os.O_CREATE, 0666)

// Close the file when the surrounding function exits
defer file.Close()

// Read old content
current, err := ioutil.ReadAll(file)

// Do something with that old content, for example, print it
fmt.Println(string(current))

// Standard input from keyboard
var userInput string
fmt.Scanln(&userInput)

// Now write the input back to file text.txt
_, err = file.WriteString(userInput)
The magic here is that you use the flag os.O_APPEND while opening the file, which makes file.WriteString() append. Note that you need to close the file after opening it, which we do when the surrounding function exits, using the defer keyword.

Reading log files as they're updated in Go

I'm trying to parse some log files as they're being written in Go, but I'm not sure how I would accomplish this without rereading the file again and again while checking for changes.
I'd like to be able to read to EOF, wait until the next line is written, and read to EOF again, etc. Much like what tail -f does.
I have written a Go package -- github.com/hpcloud/tail -- to do exactly this.
t, err := tail.TailFile("/var/log/nginx.log", tail.Config{Follow: true})
for line := range t.Lines {
	fmt.Println(line.Text)
}
...
Quoting kostix's answer:
in real life files might be truncated, replaced or renamed (because that's what tools like logrotate are supposed to do).
If a file gets truncated, it will automatically be re-opened. To support re-opening renamed files (due to logrotate, etc.), you can set Config.ReOpen, viz.:
t, err := tail.TailFile("/var/log/nginx.log", tail.Config{
	Follow: true,
	ReOpen: true})
for line := range t.Lines {
	fmt.Println(line.Text)
}
Config.ReOpen is analogous to tail -F (capital F):
-F      The -F option implies the -f option, but tail will also check to see if the file being followed has been renamed or rotated. The file is closed and reopened when tail detects that the filename being read from has a new inode number. The -F option is ignored if reading from standard input rather than a file.
You have to either watch the file for changes (using an OS-specific subsystem to accomplish this) or poll it periodically to see whether its modification time (and size) changed. In either case, after reading another chunk of data you remember the file offset and restore it before reading another chunk after detecting the change.
But note that this seems to be easy only on paper: in real life files might be truncated, replaced or renamed (because that's what tools like logrotate are supposed to do).
See this question for more discussion of this problem.
A simple example:
package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"time"
)

func tail(filename string, out io.Writer) {
	f, err := os.Open(filename)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	r := bufio.NewReader(f)
	info, err := f.Stat()
	if err != nil {
		panic(err)
	}
	oldSize := info.Size()

	for {
		// Drain everything currently in the file.
		for line, prefix, err := r.ReadLine(); err != io.EOF; line, prefix, err = r.ReadLine() {
			if prefix {
				fmt.Fprint(out, string(line))
			} else {
				fmt.Fprintln(out, string(line))
			}
		}
		pos, err := f.Seek(0, io.SeekCurrent)
		if err != nil {
			panic(err)
		}
		// Poll until the size changes, then resume reading.
		for {
			time.Sleep(time.Second)
			newinfo, err := f.Stat()
			if err != nil {
				panic(err)
			}
			newSize := newinfo.Size()
			if newSize != oldSize {
				if newSize < oldSize {
					// File was truncated; start over from the beginning.
					f.Seek(0, io.SeekStart)
				} else {
					f.Seek(pos, io.SeekStart)
				}
				r = bufio.NewReader(f)
				oldSize = newSize
				break
			}
		}
	}
}

func main() {
	tail("x.txt", os.Stdout)
}
I'm also interested in doing this, but haven't (yet) had the time to tackle it. One approach that occurred to me is to let "tail" do the heavy lifting. It would likely make your tool platform-specific, but that may be OK. The basic idea would be to use Cmd from the "os/exec" package to follow the file. You could fork a process that is the equivalent of tail --retry --follow=name prog.log, and then listen to its Stdout using the Stdout reader on the Cmd object.
Sorry, I know it's just a sketch, but maybe it's helpful.
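To make the sketch slightly more concrete, here is roughly what that could look like; a hedge: the flag spellings are GNU tail's, and error handling is minimal:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Fork a tail process; --follow=name plus --retry keeps
	// following the file across log rotation.
	cmd := exec.Command("tail", "--retry", "--follow=name", "prog.log")
	out, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	scanner := bufio.NewScanner(out)
	for scanner.Scan() {
		fmt.Println(scanner.Text()) // each new log line as it arrives
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
	cmd.Wait()
}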
There are many ways to do this. On Linux, one can use the inotify interface to do this; the fsnotify package wraps it (and the equivalent facilities on other platforms).
One can use this package: https://github.com/fsnotify/fsnotify
Sample code:
watcher, err := fsnotify.NewWatcher()
if err != nil {
	log.Fatal(err)
}
defer watcher.Close()

err = watcher.Add(fileName)
if err != nil {
	log.Fatal(err)
}

for {
	select {
	case event := <-watcher.Events:
		if event.Op&fsnotify.Write == fsnotify.Write {
			log.Println("modified file:", event.Name)
		}
	case err := <-watcher.Errors:
		log.Println("error:", err)
	}
}
Hope this helps!
