I need to read a file at a specific location, given by a byte offset.
filePath := "test_file.txt"
byteOffset := 6
// Read file
How can I achieve this, if possible without reading the whole file into memory?
Package os
import "os"
func (*File) Seek
func (f *File) Seek(offset int64, whence int) (ret int64, err error)
Seek sets the offset for the next Read or Write on file to offset,
interpreted according to whence: 0 means relative to the origin of the
file, 1 means relative to the current offset, and 2 means relative to
the end. It returns the new offset and an error, if any. The behavior
of Seek on a file opened with O_APPEND is not specified.
Package io
import "io"
Seek whence values.
const (
	SeekStart   = 0 // seek relative to the origin of the file
	SeekCurrent = 1 // seek relative to the current offset
	SeekEnd     = 2 // seek relative to the end
)
For example,
package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	filePath := "test.file"
	byteOffset := 6

	f, err := os.Open(filePath)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// seek to the absolute byte offset from the start of the file
	_, err = f.Seek(int64(byteOffset), io.SeekStart)
	if err != nil {
		panic(err)
	}

	// read up to 16 bytes starting at that offset
	buf := make([]byte, 16)
	n, err := f.Read(buf[:cap(buf)])
	buf = buf[:n]
	if err != nil && err != io.EOF {
		panic(err)
	}

	fmt.Printf("%s\n", buf)
}
Output:
$ cat test.file
0123456789
$ go run seek.go
6789
$
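As an aside (a sketch, not part of the original answer), os.File also provides ReadAt, which performs a positioned read without moving the file offset; it can replace the Seek/Read pair inside the main function above:
// single positioned read at byteOffset; the file's own offset is
// left untouched. ReadAt returns io.EOF when it reads fewer than
// len(buf) bytes.
buf := make([]byte, 16)
n, err := f.ReadAt(buf, int64(byteOffset))
if err != nil && err != io.EOF {
	panic(err)
}
fmt.Printf("%s\n", buf[:n])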
Related
We can find the byte offset of a pattern in a file with
"grep -ob pattern filename";
however, grep is not UTF-8 safe.
How do I find the byte offset of a pattern in Go? The file is a process log, which can be terabytes in size.
This is what I want to get in Go:
$ cat fname
hello world
findme
hello 世界
findme again
...
$ grep -ob findme fname
12:findme
32:findme
Regexp.FindAllStringIndex(s string, n int) returns the byte start/end indexes (as two-element slices) of all successive matches of the expression:
package main

import (
	"fmt"
	"io/ioutil"
	"regexp"
)

func main() {
	fname := "C:\\Users\\UserName\\go\\src\\so56798431\\fname"
	b, err := ioutil.ReadFile(fname)
	if err != nil {
		panic(err)
	}
	re, err := regexp.Compile("findme")
	if err != nil {
		panic(err)
	}
	// each element is a [start, end) byte-index pair for one match
	fmt.Println(re.FindAllStringIndex(string(b), -1))
}
Output:
[[12 18] [32 38]]
Note: I did this on Microsoft Windows, but saved the file in UNIX format (linefeed); if the input file is saved in Windows format (carriage return & linefeed), the byte offsets would increase to 13 and 35, respectively.
UPDATE: for large files, use bufio.Scanner; for example:
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
)

func main() {
	f, err := os.Open("C:\\Users\\UserName\\go\\src\\so56798431\\fname")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	re, err := regexp.Compile("findme")
	if err != nil {
		log.Fatal(err)
	}

	scanner := bufio.NewScanner(f)
	bytesRead := 0
	for scanner.Scan() {
		line := scanner.Text()
		// report the absolute byte offset of each match on this line
		results := re.FindAllStringIndex(line, -1)
		for _, result := range results {
			fmt.Println(bytesRead + result[0])
		}
		// account for the UNIX EOL marker stripped by the scanner
		bytesRead += len(line) + 1
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
Output:
12
32
I'm writing a string to a file, and I'd like to get the offset of the string which was just written.
Here is the code writing the file:
package main

import (
	"os"
)

func main() {
	path := "test_file.txt"
	byteString := []byte("string to write")

	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0600)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	if _, err = f.Write(byteString); err != nil {
		panic(err)
	}
}
How can I get the offset after having written the line?
File.Write only returns the number of bytes written. If you want the offset, you can either:
Call os.Stat before writing, then use File.WriteAt to write at the end-of-file offset provided by the FileInfo structure (note that in current Go, WriteAt returns an error for files opened with O_APPEND).
Call os.Stat after writing, and subtract the length written from the new size (a sketch of this follows).
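For example, a minimal sketch of the second approach built on the question's own code, assuming no other writer appends to the file between the Write and the Stat call:
package main

import (
	"fmt"
	"os"
)

func main() {
	path := "test_file.txt"
	byteString := []byte("string to write")

	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0600)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	if _, err = f.Write(byteString); err != nil {
		panic(err)
	}

	// the bytes just written end at the new file size, so they
	// started at size - len(byteString); this assumes no concurrent
	// appender raced with the write
	fi, err := f.Stat()
	if err != nil {
		panic(err)
	}
	offset := fi.Size() - int64(len(byteString))
	fmt.Println("offset of written string:", offset)
}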
I have a flat file that has 339276 lines of text in it, for a size of 62.1 MB. I am attempting to read in all the lines, parse them based on some conditions I have, and then insert them into a database.
I originally attempted to use a bufio Scan() loop with Text() to get each line, but I ran out of buffer space. I switched to using bufio's ReadLine/ReadString/ReadByte (I tried each) and hit the same problem with each: not enough buffer space.
I tried using Read and setting the buffer size, but as the documentation says, it is actually a const that can be made smaller but never bigger than 64*1024 bytes. I then tried File.ReadAt, where I set the starting position and moved it along as I brought in each section, to no avail. I have looked at the following examples and explanations (not an exhaustive list):
Read text file into string array (and write)
How to Read last lines from a big file with Go every 10 secs
reading file line by line in go
How do I read in an entire file (either line by line or the whole thing at once) into a slice so I can then go do things to the lines?
Here is some code that I have tried:
file, err := os.Open(feedFolder + value)
handleError(err)
defer file.Close()

// fileInfo, _ := file.Stat()
var linesInFile []string
r := bufio.NewReader(file)
for {
	path, err := r.ReadLine("\n") // 0x0A separator = newline
	linesInFile = append(linesInFile, path)
	if err == io.EOF {
		fmt.Printf("End Of File: %s", err)
		break
	} else if err != nil {
		handleError(err) // if you return error
	}
}
fmt.Println("Last Line: ", linesInFile[len(linesInFile)-1])
Here is something else I tried:
var fileSize int64 = fileInfo.Size()
fmt.Printf("File Size: %d\t", fileSize)
var bufferSize int64 = 1024 * 60
bytes := make([]byte, bufferSize)
var fullFile []byte
var start int64 = 0
var iterationCounter int64 = 1
var currentErr error = nil
for currentErr != io.EOF {
	_, currentErr = file.ReadAt(bytes, start)
	fullFile = append(fullFile, bytes...)
	start = (bufferSize * iterationCounter) + 1
	iterationCounter++
}
fmt.Printf("Err: %s\n", currentErr)
fmt.Printf("fullFile Size: %d\n", len(fullFile))
fmt.Printf("Start: %d", start)

var currentLine []string
for _, value := range fullFile {
	if string(value) != "\n" {
		currentLine = append(currentLine, string(value))
	} else {
		singleLine := strings.Join(currentLine, "")
		linesInFile = append(linesInFile, singleLine)
		currentLine = nil
	}
}
I am at a loss. Either I don't understand exactly how the buffer works or I don't understand something else. Thanks for reading.
bufio's Scan() and Text() in a loop work perfectly for me on files of much larger size, so I suppose you have lines exceeding the buffer capacity. Then
check your line endings,
and check which Go version you use: your call path, err := r.ReadLine("\n") cannot compile, because func (b *bufio.Reader) ReadLine() (line []byte, isPrefix bool, err error) takes no arguments, and it has the return value isPrefix specifically for your use case (a sketch follows the link below):
http://golang.org/pkg/bufio/#Reader.ReadLine
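For illustration, a minimal sketch (not from the original answer) of reading arbitrarily long lines with ReadLine, accumulating chunks until isPrefix reports false; readLong is a hypothetical helper name:
package main

import (
	"bufio"
	"fmt"
	"os"
)

// readLong returns the next full line from r, however long, by
// accumulating chunks while ReadLine reports isPrefix == true.
func readLong(r *bufio.Reader) ([]byte, error) {
	var line []byte
	for {
		chunk, isPrefix, err := r.ReadLine()
		if err != nil {
			return line, err
		}
		line = append(line, chunk...)
		if !isPrefix {
			return line, nil
		}
	}
}

func main() {
	r := bufio.NewReader(os.Stdin)
	for {
		line, err := readLong(r)
		if err != nil {
			break // io.EOF ends the loop
		}
		fmt.Println(len(line))
	}
}
Alternatively, since Go 1.6 bufio.Scanner has a Buffer method that raises its maximum token size above the 64*1024 default, which also lifts the limit being hit here.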
It's not clear that it's necessary to read in all the lines before parsing them and inserting them into a database; try to avoid that (a streaming sketch follows the example below).
You have a small file: "a flat file that has 339276 lines of text in it for a size of 62.1 MB." For example,
package main

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"
)

func readLines(filename string) ([]string, error) {
	var lines []string
	file, err := ioutil.ReadFile(filename)
	if err != nil {
		return lines, err
	}
	buf := bytes.NewBuffer(file)
	for {
		line, err := buf.ReadString('\n')
		if len(line) == 0 {
			if err != nil {
				if err == io.EOF {
					break
				}
				return lines, err
			}
		}
		lines = append(lines, line)
		if err != nil && err != io.EOF {
			return lines, err
		}
	}
	return lines, nil
}

func main() {
	// a flat file that has 339276 lines of text in it for a size of 62.1 MB
	filename := "flat.file"
	lines, err := readLines(filename)
	fmt.Println(len(lines))
	if err != nil {
		fmt.Println(err)
		return
	}
}
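And a minimal sketch of the streaming alternative mentioned above: parse and insert each line as it is read, instead of accumulating them all. processLine is a hypothetical stand-in for the parse-and-insert step:
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

// processLine stands in for "parse the line and insert it into the database"
func processLine(line string) {
	_ = line
}

func main() {
	f, err := os.Open("flat.file")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	n := 0
	for scanner.Scan() {
		processLine(scanner.Text())
		n++
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Println(n, "lines processed")
}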
It seems to me this variant of readLines is shorter and faster than the one suggested by peterSO:
func readLines(filename string) (map[int]string, error) {
	lines := make(map[int]string)
	data, err := ioutil.ReadFile(filename)
	if err != nil {
		return nil, err
	}
	for n, line := range strings.Split(string(data), "\n") {
		lines[n] = line
	}
	return lines, nil
}
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	FileName := "assets/file.txt"
	file, err := os.Open(FileName)
	if err != nil {
		log.Fatal(err)
	}
	defer file.Close()

	scanner := bufio.NewScanner(file)
	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
}
I am trying to parse a file that annoyingly consists of many separately zipped segments. I have parsed these segments one at a time into a slice of bytes, and I want to uncompress them as I go.
Here is my current code that does the decompressing, which doesn't work. from and to are just set at the top as an example; in reality they are set by the code. data is the byte slice containing the entire file. I don't want to seek the file while it's on disk because it lives on another server, so it's only realistic for me to load the entire file into a []byte first and then parse it.
from, to := 0, 1000
b := bytes.NewReader(data[from : from+to])
z, err := zlib.NewReader(b)
CheckErr(err)
defer z.Close()
p := make([]byte, 0, 1024)
z.Read(p)
fmt.Println(string(p))
So how is it so massively difficult just to unzip a slice of bytes? Anyway...
The problem appears to be with how I am reading it out. Where it says z.Read, that doesn't seem to do anything.
How can I read the entire thing in one go into a slice of bytes?
Here's an outline for you. Note: In Go, CHECK FOR ERRORS!
package main

import (
	"bytes"
	"compress/zlib"
	"fmt"
	"io/ioutil"
)

func readSegment(data []byte, from, to int) ([]byte, error) {
	b := bytes.NewReader(data[from : from+to])
	z, err := zlib.NewReader(b)
	if err != nil {
		return nil, err
	}
	defer z.Close()
	// ReadAll keeps calling Read until io.EOF, so the whole
	// decompressed segment comes back in one slice
	p, err := ioutil.ReadAll(z)
	if err != nil {
		return nil, err
	}
	return p, nil
}

func main() {
	from, to := 0, 1000
	data := make([]byte, from+to)
	// ** parse input segments into data **
	p, err := readSegment(data, from, to)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(string(p))
}
Use ReadAll(r io.Reader) ([]byte, error) from the io/ioutil package.
p, err := ioutil.ReadAll(z) // read from the zlib reader z, not the raw reader b
fmt.Println(string(p))
Read only reads up to the length of the given slice; your slice has length 0 (and capacity 1024), so Read reads nothing. Give it a slice with non-zero length.
To read in chunks of 1024 bytes from the question's zlib reader z:
p := make([]byte, 1024)
for {
	numBytes, err := z.Read(p)
	// do what you want with p[:numBytes] here; note that Read may
	// return the final chunk of data together with io.EOF
	if err == io.EOF {
		// you are done, numBytes might be less than len(p)
		break
	}
	if err != nil {
		// handle the error
		break
	}
}
If you are getting the data from a webserver, you might even do
import (
	"compress/zlib"
	"io/ioutil"
	"net/http"
)

...

resp, errGet := http.Get("http://example.com/somefile")
// do error handling
defer resp.Body.Close() // close only after the body has been read
z, errZ := zlib.NewReader(resp.Body)
// do error handling
defer z.Close()
p, err := ioutil.ReadAll(z)
// do error handling
since resp.Body happens to be an io.Reader, like most io-related types.
In Go, I want to read in a file line by line, into strings or []rune slices.
The file should be encoded in UTF-8, but my program shouldn't trust it. If it contains invalid UTF-8, I want to properly handle the error.
There is bytes.Runes(s []byte) []rune, but that has no error return value. Will it panic on encountering invalid UTF-8?
For example,
package main

import (
	"bufio"
	"fmt"
	"io/ioutil"
	"os"
	"strings"
	"unicode/utf8"
)

func main() {
	tFile := "text.txt"
	t := []byte{'\xFF', '\n'} // 0xFF is never valid UTF-8
	ioutil.WriteFile(tFile, t, 0666)

	f, err := os.Open(tFile)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	defer f.Close()

	r := bufio.NewReader(f)
	s, err := r.ReadString('\n')
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	s = strings.TrimRight(s, "\n")
	fmt.Println(t, s, []byte(s))
	if !utf8.ValidString(s) {
		fmt.Println("!utf8.ValidString")
	}
}
Output:
[255 10] � [255]
!utf8.ValidString
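To the direct question: bytes.Runes does not panic on invalid UTF-8; each invalid byte simply decodes to utf8.RuneError (U+FFFD), which is why it has no error return. A minimal sketch:
package main

import (
	"bytes"
	"fmt"
	"unicode/utf8"
)

func main() {
	b := []byte{'a', '\xFF', 'b'} // 0xFF is never valid UTF-8
	r := bytes.Runes(b)           // no panic; the bad byte becomes U+FFFD
	fmt.Println(r)                // [97 65533 98]
	fmt.Println(r[1] == utf8.RuneError) // true
}
Note that this is lossy: a genuine U+FFFD already present in the input is indistinguishable from a decoding error, which is why the example above validates with utf8.ValidString instead.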
For example:
import (
	"io/ioutil"
	"log"
	"unicode/utf8"
)

// ...

buf, err := ioutil.ReadFile(fname)
if err != nil {
	log.Fatal(err)
}
size := 0
for start := 0; start < len(buf); start += size {
	var r rune
	// (RuneError, 1) signals an invalid encoding; a genuine U+FFFD
	// in the input decodes with a size of 3
	if r, size = utf8.DecodeRune(buf[start:]); r == utf8.RuneError && size == 1 {
		log.Fatalf("invalid utf8 encoding at ofs %d", start)
	}
}
utf8.DecodeRune godocs:
DecodeRune unpacks the first UTF-8 encoding in p and returns the rune
and its width in bytes. If the encoding is invalid, it returns
(RuneError, 1), an impossible result for correct UTF-8.
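For a simpler whole-buffer check (a minimal sketch, not from the original answers), unicode/utf8 also provides Valid and ValidString:
package main

import (
	"fmt"
	"unicode/utf8"
)

func main() {
	good := []byte("hello 世界")
	bad := []byte{'h', 'i', '\xFF'}
	fmt.Println(utf8.Valid(good)) // true
	fmt.Println(utf8.Valid(bad))  // false
}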