Is There a Scanner Function in Go That Separates on Length (or is Newline Agnostic)?

I have two types of files in go which can be represented by the following strings:
const nonewline = "hello"      // content but no newline
const newline = "hello\nworld" // content with a newline
My goal is just to read all the content from both files (it's coming in via a stream, so I can't use something built in like ReadAll; I'm using stdioPipe) and include newlines where they appear.
I'm using Scanner, but it APPEARS there's no way to tell whether a line terminates with a newline; if I use Scanner.Text() it auto-splits, making it impossible to tell whether a line ends in a newline or simply terminated at the end of the file.
I've also looked at writing a custom Split function, but isn't that overkill? I just need to split on some fixed length (say the default buffer size, 4096), or whatever is left in the file, whichever is shorter.
I've also looked at Scanner.Split(bufio.ScanBytes), but is there a speedup from chunking the read?
Anyhow, this seems like something that should be really straightforward.

Use this loop to read a stream in fixed size chunks:
chunk := make([]byte, size) // size is the chunk size.
for {
	n, err := io.ReadFull(stream, chunk)
	if n > 0 {
		// Do something with the chunk of data.
		process(chunk[:n])
	}
	if err != nil {
		break
	}
}
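For instance, here is a minimal sketch that uses this loop to copy os.Stdin to os.Stdout in 4096-byte chunks; since the chunks are raw bytes, any newlines pass through exactly as they appear in the input (the chunk size is an assumption):
package main

import (
	"io"
	"os"
)

func main() {
	chunk := make([]byte, 4096)
	for {
		n, err := io.ReadFull(os.Stdin, chunk)
		if n > 0 {
			os.Stdout.Write(chunk[:n]) // raw bytes; newlines are preserved
		}
		if err != nil {
			// io.EOF means a clean end of input;
			// io.ErrUnexpectedEOF means the final chunk was short.
			break
		}
	}
}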

Related

golang reading a large number using bufio.NewScanner

I'm trying to read the input (many numbers separated by spaces) and convert it to a slice. The number of numbers is up to 300,000.
I got an error, and when I googled it I found there's a problem with the buffer size, so I wrote the code below.
func ChangeToInt(input string) []int {
	var nums []int
	for _, word := range strings.Fields(input) {
		num, _ := strconv.Atoi(word)
		nums = append(nums, num)
	}
	return nums
}

scanner := bufio.NewScanner(os.Stdin)
maxCapacity := 4 * 300000
buf := make([]byte, maxCapacity)
scanner.Buffer(buf, maxCapacity)
scanner.Scan()
input := scanner.Text()
nums := ChangeToInt(input)
But it's still not working. What's the problem?
You are using bufio.Scanner to read your input. By default bufio.Scanner reads lines, and it uses an internal buffer to store each line. By default a line may have a max length of bufio.MaxScanTokenSize, which is 64 KB. If your lines are longer than this, you'll get an error.
The internal buffer size may be increased using the Scanner.Buffer() method, but if your input is a space-separated list of numbers, I'd advise changing the split function of the Scanner instead.
As mentioned above, by default the scanner splits the input into lines. Change it to split the input into words instead; the bufio package has a ready-made split function for that: bufio.ScanWords. Use it like this:
scanner := bufio.NewScanner(os.Stdin)
scanner.Split(bufio.ScanWords)
Now scanner.Text() will return individual words (numbers in your case) instead of complete lines, so the default 64 KB limit applies to each word rather than to a whole line; individual numbers will fit easily.
Also check whether scanning succeeded by calling scanner.Err().
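Putting it together, a minimal sketch of the word-by-word approach (reading from os.Stdin as in the question):
scanner := bufio.NewScanner(os.Stdin)
scanner.Split(bufio.ScanWords)
var nums []int
for scanner.Scan() {
	num, err := strconv.Atoi(scanner.Text())
	if err != nil {
		// handle a token that isn't a number
	}
	nums = append(nums, num)
}
if err := scanner.Err(); err != nil {
	// handle read error
}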
You can also use bufio.NewReader and its ReadString('\n') method; it works well for large input data.
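A rough sketch of that approach (note that ReadString includes the delimiter in the returned string):
reader := bufio.NewReader(os.Stdin)
line, err := reader.ReadString('\n') // line includes the trailing '\n', if present
if err != nil && err != io.EOF {
	// handle error
}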

Remove \r and \n (CRLF) from serial input

I'm currently writing Go code which reads a sensor value through an Arduino using the serial port. Currently I am getting "\r" and "\n" in my output. I know in Python, you can do:
line = line.decode('utf-8')
to get rid of the characters. How would you do that using Golang? I'm fairly new to the language so any help would be appreciated! Here is what a snippet of the output looks like currently:
"arduinoLED\"}\r\n{\"temperature\"
Also if anyone could let me know how I can read a line in Go (similar to Python's line.readline()) that would be great.
Many thanks!
If you read a stream by lines using a default bufio.Scanner (which is the usual way) then both regular (\n) and CRLF (\r\n) line breaks will be discarded:
doc := "Hello\nWorld!\nGoodbye,\r\nnewlines!\r\n"
scanner := bufio.NewScanner(bytes.NewReader([]byte(doc)))
for scanner.Scan() {
fmt.Printf("%q\n", scanner.Text()) // Note our own newline here
}
if err := scanner.Err(); err != nil {
panic(err) // TODO: handle error properly
}
// Prints:
// "Hello,"
// "World!"
// "Goodbye,"
// "newlines!"
Of course, instead of the bytes reader in the example above you'll probably have an existing Reader, but the usage should otherwise be identical.
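If you would rather read lines yourself, closer to Python's readline, here is a sketch using bufio.Reader and trimming the terminator explicitly; port stands in for your serial connection and is an assumed name:
reader := bufio.NewReader(port) // port: your serial port reader (assumed)
line, err := reader.ReadString('\n')
if err != nil {
	// handle error (including io.EOF)
}
line = strings.TrimRight(line, "\r\n") // strip both CR and LF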

Read random lines off a text file in go

I am using encoding/csv to read and parse a very large .csv file.
I need to randomly select lines and pass them through some test.
My current solution is to read the whole file like
reader := csv.NewReader(file)
lines, err := reader.ReadAll()
then randomly select lines from lines
The obvious problem is it takes a long time to read the whole thing and I need lots of memory.
Question:
My question is: encoding/csv gives me a reader; is there a way to use it to read random lines instead of loading the whole thing at once?
This is more curiosity to learn about io/reader than a practical question, since in the end it is very likely more efficient to read the file once and access it in memory than to keep seeking random lines on disk.
Apokalyptik's answer is the closest to what you want. Readers are streamers, so you can't just hop to a random place per se.
Naively choosing a probability against which you keep any given line as you read it in can lead to problems: you may get to the end of the file without holding enough lines of input, or you may be too quick to hold lines and not get a good sample. Either is much more likely than guessing correctly, since you don't know beforehand how many lines are in the file (unless you first iterate it once to count them).
What you really need is reservoir sampling.
Basically, read the file line by line, deciding for each line whether to hold it, like so: you hold the first line you read with probability 1/1. When you read the second line, you replace the line you're holding with probability 1/2. After the third line, you replace with probability 1/3, so the chance that your original line survives is 1/2 * 2/3 = 1/3. In general you end up holding any given line with probability 1/N, where N is the number of lines you've read so far. Here's a more detailed look at the algorithm (don't try to implement it from this paragraph alone).
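A minimal sketch of single-line reservoir sampling in Go (the function name is illustrative, not from any package):
func randomLine(r io.Reader, rng *rand.Rand) (string, error) {
	scanner := bufio.NewScanner(r)
	var chosen string
	n := 0
	for scanner.Scan() {
		n++
		if rng.Intn(n) == 0 { // replace the held line with probability 1/n
			chosen = scanner.Text()
		}
	}
	return chosen, scanner.Err()
}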
The simplest solution would be to decide, as you read each line, whether to test it or throw it away. Make that decision random so you don't have to keep the entire file in RAM, then pass through the file once, running your tests. You can do the same thing with non-random distributions too (e.g. every X bytes, or every x lines, etc.).
My suggestion would be to randomize the input file in advance, e.g. using shuf
http://en.wikipedia.org/wiki/Shuf
Then you can simply read the first n lines as needed.
This doesn't help you learn more about io/reader, but it might solve your problem nevertheless.
I had a similar need: to randomly read (specific) lines from a massive text file. I wrote a package that I call ramcsv to do this.
It first reads through the entire file once and marks the byte offset of each line (it stores this information in memory, but does not store the full line).
When you request a line number, it will transparently seek to the correct offset and give you the csv-parsed line.
(Note that the csv.Reader parameter that is passed as the second argument to ramcsv.New is used only to copy the settings into a new reader.) This could no doubt be made more efficient, but it was sufficient for my needs and spared me from reading a ~20GB text file into memory.
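The underlying idea can be sketched without the package: scan the file once recording each line's starting byte offset, then Seek to an offset on demand. This sketch assumes LF-only line endings and an *os.File named f; it is not ramcsv's actual API:
var offsets []int64
var pos int64
scanner := bufio.NewScanner(f) // f is an *os.File
for scanner.Scan() {
	offsets = append(offsets, pos)
	pos += int64(len(scanner.Bytes())) + 1 // +1 for the '\n' (LF endings assumed)
}
// Later, to jump to line i:
f.Seek(offsets[i], io.SeekStart)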
encoding/csv does not give you an io.Reader; it gives you a csv.Reader (note the lack of package qualification on the Reader in the definition of csv.NewReader [1], indicating that the Reader it returns belongs to the same package).
A csv.Reader implements only the methods you see there, so it looks like there is no way to do what you want short of writing your own CSV parser.
[1] http://golang.org/pkg/encoding/csv/#NewReader
Per this SO answer, there's a relatively memory efficient way to read a single random line from a large file.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"io"
	"math/rand"
	"strconv"
	"time"
)

var words []byte

func main() {
	prepareWordsVar()
	var r = rand.New(rand.NewSource(time.Now().Unix()))
	var line string
	for len(line) == 0 {
		line = getRandomLine(r)
	}
	fmt.Println(line)
}

func prepareWordsVar() {
	base := []string{"some", "really", "file", "with", "many", "manyy", "manyyy", "manyyyy", "manyyyyy", "lines."}
	// Length 0, capacity as a hint; a non-zero length here would leave NUL bytes at the front.
	words = make([]byte, 0, 200*len(base))
	for i := 0; i < 200; i++ {
		for _, s := range base {
			words = append(words, []byte(s+strconv.Itoa(i)+"\n")...)
		}
	}
}

func getRandomLine(r *rand.Rand) string {
	wordsLen := int64(len(words))
	offset := r.Int63n(wordsLen)
	rd := bytes.NewReader(words)
	_, _ = rd.Seek(offset, io.SeekStart)
	scanner := bufio.NewScanner(rd)
	// Discard the first token: it is bound to be a partial line.
	if !scanner.Scan() {
		return ""
	}
	scanner.Scan()
	if err := scanner.Err(); err != nil {
		fmt.Printf("err: %s\n", err)
		return ""
	}
	// Now we have a random line.
	return scanner.Text()
}
Go Playground
Couple of caveats:
You should use crypto/rand if you need it to be cryptographically secure.
Note the bufio.Scanner's default MaxScanTokenSize, and adjust code accordingly.
As per original SO answer, this does introduce bias based on the length of the line.

How to be definite about the number of whitespace fmt.Fscanf consumes?

I am trying to implement a PPM decoder in Go. PPM is an image format that consists of a plaintext header and then some binary image data. The header looks like this (from the spec):
Each PPM image consists of the following:
A "magic number" for identifying the file type. A ppm image's magic number is the two characters "P6".
Whitespace (blanks, TABs, CRs, LFs).
A width, formatted as ASCII characters in decimal.
Whitespace.
A height, again in ASCII decimal.
Whitespace.
The maximum color value (Maxval), again in ASCII decimal. Must be less than 65536 and more than zero.
A single whitespace character (usually a newline).
I try to decode this header with the fmt.Fscanf function. The following call to
fmt.Fscanf parses the header (not addressing the caveat explained below):
var magic string
var width, height, maxVal uint
fmt.Fscanf(input, "%2s %d %d %d", &magic, &width, &height, &maxVal)
The documentation of fmt states:
Note: Fscan etc. can read one character (rune) past the input they
return, which means that a loop calling a scan routine may skip some
of the input. This is usually a problem only when there is no space
between input values. If the reader provided to Fscan implements
ReadRune, that method will be used to read characters. If the reader
also implements UnreadRune, that method will be used to save the
character and successive calls will not lose data. To attach ReadRune
and UnreadRune methods to a reader without that capability, use
bufio.NewReader.
As the very next character after the final whitespace is already the beginning of the image data, I have to be certain how much whitespace fmt.Fscanf consumed after reading MaxVal. My code must work with whatever reader the caller provided, and parts of it must not read past the end of the header; therefore wrapping everything in a buffered reader is not an option, since the buffered reader might read more of the input than I actually want.
Some testing suggests that parsing a dummy character at the end solves the issues:
var magic string
var width, height, maxVal uint
var dummy byte
fmt.Fscanf(input, "%2s %d %d %d%c", &magic, &width, &height, &maxVal, &dummy)
Is that guaranteed to work according to the specification?
No, I would not consider that safe. While it works now, the documentation states that the function reserves the right to read past the value by one character unless you have an UnreadRune() method.
By wrapping your reader in a bufio.Reader, you can ensure the reader has an UnreadRune() method. You will then need to read the final whitespace yourself.
buf := bufio.NewReader(input)
fmt.Fscanf(buf,"%2s %d %d %d",&magic,&width,&height,&maxVal)
buf.ReadRune() // remove next rune (the whitespace) from the buffer.
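To be defensive, you can also check that the discarded rune really is whitespace; a sketch using unicode.IsSpace:
buf := bufio.NewReader(input)
if _, err := fmt.Fscanf(buf, "%2s %d %d %d", &magic, &width, &height, &maxVal); err != nil {
	// handle parse error
}
if ch, _, err := buf.ReadRune(); err != nil || !unicode.IsSpace(ch) {
	// malformed header: expected a single whitespace character here
}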
Edit:
As we discussed in the chat, you can assume the dummy char method works and then write a test so you know when it stops working. The test can be something like:
func TestFmtBehavior(t *testing.T) {
	// Use MultiReader to prevent r from implementing io.RuneScanner.
	// Note: the input has two trailing spaces.
	r := io.MultiReader(bytes.NewReader([]byte("data  ")))
	n, err := fmt.Fscanf(r, "%s%c", new(string), new(byte))
	if n != 2 || err != nil {
		t.Error("failed scan", n, err)
	}
	// The dummy char read 1 extra char past "data",
	// so one byte should still remain.
	if n, err := r.Read(make([]byte, 5)); n != 1 {
		t.Error("assertion failed", n, err)
	}
}

Removing NUL characters from bytes

To teach myself Go I'm building a simple server that takes some input, does some processing, and sends output back to the client (that includes the original input).
The input can vary in length from around 5 to 13 characters, plus endlines and whatever other guff the client sends.
The input is read into a byte array and then converted to a string for some processing. Another string is appended to this string and the whole thing is converted back into a byte array to get sent back to the client.
The problem is that the input is padded with a bunch of NUL characters, and I'm not sure how to get rid of them.
So I could loop through the array and when I come to a nul character, note the length (n), create a new byte array of that length, and copy the first n characters over to the new byte array and use that. Is that the best way, or is there something to make this easier for me?
Some stripped down code:
data := make([]byte, 16)
c.Read(data)
s := strings.Replace(string(data[:]), "\n", "", -1)
s = strings.Replace(s, "\r", "", -1)
s += "some other string"
response := []byte(s)
c.Write(response)
c.Close()
Also if I'm doing anything else obviously stupid here it would be nice to know.
In package "bytes", func Trim(s []byte, cutset string) []byte is your friend:
Trim returns a subslice of s by slicing off all leading and trailing UTF-8-encoded Unicode code points contained in cutset.
// Remove any NULL characters from 'b'
b = bytes.Trim(b, "\x00")
Your approach sounds basically right. Some remarks:
When you have found the index of the first nul byte in data, you don't need to copy, just truncate the slice: data[:idx].
bytes.Index should be able to find that index for you.
There is also bytes.Replace so you don't need to convert to string.
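For example, the truncate-at-first-NUL approach is a couple of lines with bytes.IndexByte:
if i := bytes.IndexByte(data, 0); i >= 0 {
	data = data[:i] // truncate at the first NUL; no copy needed
}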
The io.Reader documentation says:
Read reads up to len(p) bytes into p. It returns the number of bytes read (0 <= n <= len(p)) and any error encountered.
If the call to Read in the application does not read 16 bytes, then data will have trailing zero bytes. Use the number of bytes read to trim the zero bytes from the buffer.
data := make([]byte, 16)
n, err := c.Read(data)
if err != nil {
	// handle error
}
data = data[:n]
There's another issue: there's no guarantee that a single Read slurps up the whole "message" sent by the peer. The application may need to call Read more than once to get the complete message.
You mention endlines in the question. If the message from the client is terminated by a newline, use bufio.Scanner to read lines from the connection:
s := bufio.NewScanner(c)
if s.Scan() {
	data = s.Bytes() // data is the next line, not including the line ending
}
if s.Err() != nil {
	// handle error
}
You could utilize the return value of Read:
package main

import "strings"

func main() {
	r, b := strings.NewReader("north east south west"), make([]byte, 16)
	n, e := r.Read(b)
	if e != nil {
		panic(e)
	}
	b = b[:n]
	println(string(b) == "north east south")
}
https://golang.org/pkg/io#Reader
