I have a Go function that appends a line to a file:
func AppendLine(p string, s string) error {
	f, err := os.OpenFile(p, os.O_APPEND|os.O_WRONLY, 0600)
	if err != nil {
		return errors.WithStack(err)
	}
	defer f.Close()
	_, err = f.WriteString(s + "\n")
	return errors.WithStack(err)
}
I'm wondering if the flags os.O_APPEND|os.O_WRONLY make this a safe operation. Is there a guarantee that no matter what happens (even if the process gets shut off in the middle of writing) the existing file contents cannot be deleted?
The os package is a thin wrapper around system calls, so you get the guarantees provided by the operating system. In this case the Linux kernel guarantees that writes to a file opened with the O_APPEND flag are positioned at the end of the file atomically: http://man7.org/linux/man-pages/man2/open.2.html
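To illustrate what that guarantee means in practice, here is a small self-contained sketch (demo.log is only a placeholder name, not from your code): several goroutines append through separate descriptors opened with O_APPEND, and because the kernel positions each write at the then-current end of the file, existing contents are never overwritten, only extended.
package main

import (
	"fmt"
	"os"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// Each writer opens its own descriptor in append mode.
			f, err := os.OpenFile("demo.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0600)
			if err != nil {
				panic(err)
			}
			defer f.Close()
			for j := 0; j < 100; j++ {
				// Each write lands at the current end of the file, so
				// writers interleave lines but never clobber each other.
				fmt.Fprintf(f, "writer %d line %d\n", id, j)
			}
		}(i)
	}
	wg.Wait()
}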
I am new to Go. When I read the code example for the "archive/tar" package, I saw some code like this:
// Iterate through the files in the archive.
for {
	hdr, err := tr.Next()
	if err == io.EOF {
		// end of tar archive
		break
	}
	if err != nil {
		log.Fatalln(err)
	}
	fmt.Printf("Contents of %s:\n", hdr.Name)
	if _, err := io.Copy(os.Stdout, tr); err != nil {
		log.Fatalln(err)
	}
	fmt.Println()
}
The output looks like this:
Contents of readme.txt:
This archive contains some text files.
Contents of gopher.txt:
Gopher names:
George
Geoffrey
Gonzo
Contents of todo.txt:
Get animal handling license.
Can anyone tell me how the program prints the contents of each file? Thank you.
You left out a vital piece of the example, the two lines preceding what you posted.
// Open the tar archive for reading.
r := bytes.NewReader(buf.Bytes())
tr := tar.NewReader(r)
This creates a tar.Reader, which implements io.Reader. The call io.Copy(os.Stdout, tr) inside the if statement copies the contents of the current archive entry (the one returned by tr.Next) from that reader to Stdout.
Godoc for tar.Reader
It might also be useful to note that the code example in the package documentation never writes the tar archive it creates to disk; everything is done in memory using a bytes.Buffer. Examples of writing to disk can be found in the io package.
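For completeness, writing that in-memory archive out is just a copy into a created file. A minimal sketch, assuming buf is the bytes.Buffer from the package example and archive.tar is only a placeholder name:
// buf is the bytes.Buffer the tar archive was written into.
out, err := os.Create("archive.tar")
if err != nil {
	log.Fatalln(err)
}
defer out.Close()
if _, err := io.Copy(out, &buf); err != nil {
	log.Fatalln(err)
}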
In Python, I find context managers really helpful, and I was trying to find the same thing in Go.
E.g.:
with open("filename") as f:
    # do something here
Here open is a context manager in Python, handling entry and exit, which implicitly takes care of closing the opened file.
Instead of us explicitly doing it like this:
f, err := os.Open("filename")
if err != nil {
	// handle error
}
defer f.Close()
// do something here
Can this be done in Go as well? Thanks in advance.
No, you can't, but you can create the same illusion with a little wrapper func:
func WithFile(fname string, fn func(f *os.File) error) error {
	f, err := os.Open(fname)
	if err != nil {
		return err
	}
	defer f.Close()
	return fn(f)
}
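For example, a caller might use it like this (the file name and the body of the closure are only placeholders):
err := WithFile("filename", func(f *os.File) error {
	// do something with f here; it is closed automatically
	// once this function returns.
	_, err := io.Copy(os.Stdout, f)
	return err
})
if err != nil {
	log.Fatal(err)
}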
I'm currently saving a struct to a file so it can be loaded and used later, using gob, as follows:
func (t *Object) Load(filename string) error {
	fi, err := os.Open(filename)
	if err != nil {
		return err
	}
	defer fi.Close()
	fz, err := gzip.NewReader(fi)
	if err != nil {
		return err
	}
	defer fz.Close()
	decoder := gob.NewDecoder(fz)
	err = decoder.Decode(&t)
	if err != nil {
		return err
	}
	return nil
}

func (t *Object) Save(filename string) error {
	fi, err := os.Create(filename)
	if err != nil {
		return err
	}
	defer fi.Close()
	fz := gzip.NewWriter(fi)
	defer fz.Close()
	encoder := gob.NewEncoder(fz)
	err = encoder.Encode(t)
	if err != nil {
		return err
	}
	return nil
}
My concern is that Go might be updated in a way that changes how gobs of data are encoded and decoded. If that happens, the version of my app compiled with the new version of Go would not be able to load files saved by the previous version. This would be a major issue, but I'm not sure if it's a realistic concern or not.
So does anyone know if I can consider it safe to save and load gob encoding data like this and expect it to still work when Go is updated?
If not, what would be the best alternative? Would my function still work if I changed gob.NewDecoder and gob.NewEncoder to xml.NewDecoder and xml.NewEncoder? (Does the XML encoder encode and decode structs in the same way as gob, i.e. without me having to tell it what they look like?)
The documentation for the type GobEncoder does mention:
Note: Since gobs can be stored permanently, it is good design to guarantee the encoding used by a GobEncoder is stable as the software evolves.
For instance, it might make sense for GobEncode to include a version number in the encoding.
But that applies to custom encoders.
For the encoder provided with Go, compatibility is guaranteed at the source level: "Backwards-incompatible changes will not be made to any Go 1 point release."
That should mean gob will continue to work as it does now.
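If you want to be defensive anyway, the version-number idea from the docs is easy to apply to your own saved data. A rough sketch, where the wrapper type and method names are only illustrative (not part of your code) and Object is assumed to have exported fields:
// savedObject is a hypothetical wrapper that records a format version
// alongside the payload, so a future reader can detect old files and
// migrate them if the layout of Object ever changes.
type savedObject struct {
	Version int
	Data    Object
}

func (t *Object) SaveVersioned(w io.Writer) error {
	return gob.NewEncoder(w).Encode(savedObject{Version: 1, Data: *t})
}

func (t *Object) LoadVersioned(r io.Reader) error {
	var s savedObject
	if err := gob.NewDecoder(r).Decode(&s); err != nil {
		return err
	}
	if s.Version != 1 {
		return fmt.Errorf("unsupported save format version %d", s.Version)
	}
	*t = s.Data
	return nil
}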
A different and robust solution exists with projects like "ugorji/go/codec":
High Performance and Feature-Rich Idiomatic Go Library providing encode/decode support for different serialization formats.
Supported Serialization formats are:
msgpack: https://github.com/msgpack/msgpack
binc: http://github.com/ugorji/binc
But unless you need those specific formats, gob should be enough.
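If you did go that route, usage is similar to gob. A rough sketch with the msgpack handle, assuming the package's Handle-based API and, again, exported fields on Object:
import "github.com/ugorji/go/codec"

func saveMsgpack(w io.Writer, t *Object) error {
	var mh codec.MsgpackHandle
	return codec.NewEncoder(w, &mh).Encode(t)
}

func loadMsgpack(r io.Reader, t *Object) error {
	var mh codec.MsgpackHandle
	return codec.NewDecoder(r, &mh).Decode(t)
}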
I'm trying to parse some log files in Go as they're being written, but I'm not sure how I would accomplish this without rereading the file again and again while checking for changes.
I'd like to be able to read to EOF, wait until the next line is written, and read to EOF again, etc. It feels a lot like what tail -f does.
I have written a Go package -- github.com/hpcloud/tail -- to do exactly this.
t, err := tail.TailFile("/var/log/nginx.log", tail.Config{Follow: true})
for line := range t.Lines {
	fmt.Println(line.Text)
}
...
Quoting kostix's answer:
in real life files might be truncated, replaced or renamed (because that's what tools like logrotate are supposed to do).
If a file gets truncated, it will automatically be re-opened. To support re-opening renamed files (due to logrotate, etc.), you can set Config.ReOpen, viz.:
t, err := tail.TailFile("/var/log/nginx.log", tail.Config{
	Follow: true,
	ReOpen: true})
for line := range t.Lines {
	fmt.Println(line.Text)
}
Config.ReOpen is analogous to tail -F (capital F):
-F      The -F option implies the -f option, but tail will also check to see if the file being followed has been renamed or rotated. The file is closed and reopened when tail detects that the filename being read from has a new inode number. The -F option is ignored if reading from standard input rather than a file.
You have to either watch the file for changes (using an OS-specific subsystem to accomplish this) or poll it periodically to see whether its modification time (and size) changed. In either case, after reading a chunk of data you remember the file offset, and once you detect a change you restore that offset before reading the next chunk.
But note that this seems to be easy only on paper: in real life files might be truncated, replaced or renamed (because that's what tools like logrotate are supposed to do).
See this question for more discussion of this problem.
A simple example:
package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"time"
)

func tail(filename string, out io.Writer) {
	f, err := os.Open(filename)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	r := bufio.NewReader(f)
	info, err := f.Stat()
	if err != nil {
		panic(err)
	}
	oldSize := info.Size()
	for {
		for line, prefix, err := r.ReadLine(); err != io.EOF; line, prefix, err = r.ReadLine() {
			if prefix {
				fmt.Fprint(out, string(line))
			} else {
				fmt.Fprintln(out, string(line))
			}
		}
		pos, err := f.Seek(0, io.SeekCurrent)
		if err != nil {
			panic(err)
		}
		for {
			time.Sleep(time.Second)
			newinfo, err := f.Stat()
			if err != nil {
				panic(err)
			}
			newSize := newinfo.Size()
			if newSize != oldSize {
				if newSize < oldSize {
					f.Seek(0, io.SeekStart)
				} else {
					f.Seek(pos, io.SeekStart)
				}
				r = bufio.NewReader(f)
				oldSize = newSize
				break
			}
		}
	}
}

func main() {
	tail("x.txt", os.Stdout)
}
I'm also interested in doing this, but haven't (yet) had the time to tackle it. One approach that occurred to me is to let "tail" do the heavy lifting. It would likely make your tool platform-specific, but that may be OK. The basic idea would be to use Cmd from the "os/exec" package to follow the file. You could fork a process that was the equivalent of "tail --retry --follow=name prog.log", and then listen to its Stdout using the Stdout reader on the Cmd object.
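Something like this rough sketch, assuming a GNU-style tail is on the PATH and that prog.log is the file to follow:
package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Let the external tail binary handle rotation and truncation.
	cmd := exec.Command("tail", "--retry", "--follow=name", "prog.log")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	// Read lines from tail's stdout as they appear.
	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}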
Sorry I know it's just a sketch, but maybe it's helpful.
There are many ways to do this. On Linux, one can use the inotify interface; other modern operating systems offer similar file-watching facilities.
One can use this package: https://github.com/fsnotify/fsnotify
Sample code:
watcher, err := fsnotify.NewWatcher()
if err != nil {
	log.Fatal(err)
}
defer watcher.Close()
err = watcher.Add(fileName)
if err != nil {
	log.Fatal(err)
}
for {
	select {
	case event := <-watcher.Events:
		if event.Op&fsnotify.Write == fsnotify.Write {
			log.Println("modified file:", event.Name)
		}
	}
}
Hope this helps!