Beginner Golang sequence clarification - go

I am trying Go as a complete newbie to programming. I have a doubt about the sequence of statements in the following code, which scans for user input.
func main() {
    fmt.Print("Enter a number: \n")
    var input float64
    fmt.Scanf("%f", &input)
    output := input * 2
    fmt.Println(output)
}
But after the prompt string is displayed, when I enter a number on the new line, shouldn't it just go into a buffer of some sort or become a junk value? I say so because the variable that receives the scanned input is only declared after that first line. Had the declaration been the first or an earlier step, it would make perfect sense to me.

The value you enter is stored at the memory location of the input variable (that is what passing &input does). The input variable is created before the Scanf call (line 2 of the function body), so there is no problem at all with the order of your instructions. Maybe you can clarify what you expected instead?
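If it helps to see the flow, here is a minimal runnable sketch of the same program (the check of Scanf's return values is an addition for illustration); by the time Scanf returns, the entered value is already sitting in input:

package main

import "fmt"

func main() {
    fmt.Print("Enter a number: \n")

    var input float64 // storage for input exists from this point on

    // Scanf parses the typed text and writes the result into the
    // memory that &input points to.
    n, err := fmt.Scanf("%f", &input)
    fmt.Println("values scanned:", n, "error:", err)

    output := input * 2
    fmt.Println(output)
}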

Related

Is There a Scanner Function in Go That Separates on Length (or is Newline Agnostic)?

I have two types of files in go which can be represented by the following strings:
const nonewline = "hello"      // content but no newline
const newline = "hello\nworld" // content with newline
My goal is just to read all the content from both files (it's coming in via a stream, so I can't use something built in like ReadAll; I'm using stdioPipe) and include newlines where they appear.
I'm using Scanner, but it APPEARS that there's no way to tell whether a line terminates with a newline, and if I use Scanner.Text() it auto-splits (making it impossible to tell whether a line ends in a newline or simply terminated at the end of the file).
I've also looked at writing a custom Split function, but isn't that overkill? I just need to split on some fixed length (I assume the default buffer size, 4096) or whatever is left in the file, whichever is shorter.
I've also looked at Scanner.Split(bufio.ScanBytes), but is there a speed-up from chunking the read?
Anyhow, this seems like something that should be really straightforward.
Use this loop to read a stream in fixed size chunks:
chunk := make([]byte, size) // size is the chunk size
for {
    n, err := io.ReadFull(stream, chunk)
    if n > 0 {
        // Do something with the chunk of data.
        process(chunk[:n])
    }
    if err != nil {
        break
    }
}
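For context, io.ReadFull returns io.EOF when it reads nothing and io.ErrUnexpectedEOF when the final chunk is short, and the loop above handles both by processing whatever arrived and then breaking. A minimal runnable sketch, assuming the stream is os.Stdin and using a hypothetical process function that just echoes each chunk (newlines come through untouched because the data is handled as raw bytes):

package main

import (
    "io"
    "os"
)

// process stands in for whatever you do with each chunk.
func process(p []byte) {
    os.Stdout.Write(p)
}

func main() {
    const size = 4096 // chunk size
    chunk := make([]byte, size)
    for {
        n, err := io.ReadFull(os.Stdin, chunk)
        if n > 0 {
            process(chunk[:n])
        }
        if err != nil {
            break // io.EOF, or io.ErrUnexpectedEOF on a short final chunk
        }
    }
}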

Input not taken by fmt.Scanf or fmt.Scanln

I've been trying to write a program which takes integer input from the user and performs some calculation. What's happening is that every alternate time I run it, the program ends prematurely without having taken in any input. Both Scanf and Scanln exhibit the same behavior.
The relevant code:
func main() {
    var N int
    var output []int
    fmt.Println("Enter test cases")
    // This bottom line executes only every alternate time
    fmt.Scanf("%d", &N)
    testCases(N, N, output)
}
It prints the line "Enter test cases" and the program terminates. But when I run the program once more, it follows through. This pattern then repeats every time I try to run the program.
Better to use the bufio package, which implements buffered I/O.
Scanf/Scanln are unbuffered.
scanner := bufio.NewScanner(os.Stdin)
scanner.Scan()
input := scanner.Text()
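As a sketch of how that fits the original program, assuming the line still has to become an integer (so the scanned text is converted with strconv.Atoi):

package main

import (
    "bufio"
    "fmt"
    "os"
    "strconv"
)

func main() {
    fmt.Println("Enter test cases")

    scanner := bufio.NewScanner(os.Stdin)
    scanner.Scan()
    input := scanner.Text()

    // Convert the buffered line into the integer the program expects.
    N, err := strconv.Atoi(input)
    if err != nil {
        fmt.Println("not a number:", err)
        return
    }
    fmt.Println("read N =", N)
    // From here, call into the rest of the original program,
    // e.g. testCases(N, N, output).
}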

reading same input with bufio and scanf has different results

I was trying to write a simple program that reads the user's answers to some questions from the terminal. For instance, the questions are:
5+5
1+2
8+3
and the user should give the answer. My problem is that when I use bufio.ReadString and then compare the input with the real answer, it doesn't work properly; however, when I use Scanf everything is fine. Here is my code:
// scanner := bufio.NewReader(os.Stdin)
var correctAnswers int8 = 0
for _, pro := range problems {
    fmt.Println(pro.question)
    // answer, _ := scanner.ReadString('\n')
    var idk string
    fmt.Scanf("%s\n", &idk)
    // print(answer)
    println(pro.answer)
    if idk == pro.answer {
        fmt.Println("Correct :)")
        correctAnswers++
    } else {
        fmt.Println("Sorry!")
    }
}
fmt.Printf("You answered %d out of %d problems correctly \n", correctAnswers, len(problems))
As you can see, I commented out the bufio version. The interesting thing is that when I print the answer the user gave me, bufio.ReadString has correctly read the input from the terminal, but the comparison in the if clause still fails!
bufio.Reader.ReadString:
ReadString reads until the first occurrence of delim in the input, returning a string containing the data up to and including the delimiter.
The value returned from ReadString includes the \n on the end.
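In other words, the comparison fails because answer is e.g. "10\n" (or "10\r\n" on Windows) while pro.answer is "10". A minimal sketch of the fix, assuming the answers contain no meaningful surrounding whitespace, is to trim the line before comparing (this needs the strings package imported):

answer, _ := scanner.ReadString('\n')
answer = strings.TrimSpace(answer) // drops the trailing "\n" (and "\r" on Windows)
if answer == pro.answer {
    fmt.Println("Correct :)")
    correctAnswers++
} else {
    fmt.Println("Sorry!")
}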

Read random lines off a text file in go

I am using encoding/csv to read and parse a very large .csv file.
I need to randomly select lines and pass them through some test.
My current solution is to read the whole file like
reader := csv.NewReader(file)
lines, err := reader.ReadAll()
then randomly select lines from lines
The obvious problem is it takes a long time to read the whole thing and I need lots of memory.
Question:
My question is: encoding/csv gives me an io.Reader, so is there a way to use that to read random lines instead of loading the whole thing at once?
This is more of a curiosity to learn more about io.Reader than a practical question, since it is very likely that in the end it is more efficient to read the file once and access it in memory than to keep seeking random lines off the disk.
Apokalyptik's answer is the closest to what you want. Readers are streams, so you can't just hop to a random place per se.
Naively choosing a probability against which you keep any given line as you read it in can lead to problems: you may get to the end of the file without holding enough lines of input, or you may be too quick to hold lines and not get a good sample. Either is much more likely than guessing correctly, since you don't know beforehand how many lines are in the file (unless you first iterate it once to count them).
What you really need is reservoir sampling.
Basically, read the file line by line and decide for each line whether to hold onto it, like so: the first line you read, you hold with probability 1/1. When you read the second line, you replace the held line with it with probability 1/2. When you read the third line, you replace the held line with probability 1/3, so the chance of ending up holding either of the first two lines is 1/2 * 2/3 = 1/3 as well. In general, after reading N lines, every line seen so far has an equal 1/N chance of being the one you are holding. Here's a more detailed look at the algorithm (don't try to implement it just from what I've told you in this paragraph alone).
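A minimal sketch of single-line reservoir sampling over a stream (assuming plain line-oriented input on os.Stdin rather than CSV records):

package main

import (
    "bufio"
    "fmt"
    "math/rand"
    "os"
    "time"
)

func main() {
    r := rand.New(rand.NewSource(time.Now().UnixNano()))

    var held string
    n := 0
    scanner := bufio.NewScanner(os.Stdin)
    for scanner.Scan() {
        n++
        // Replace the held line with probability 1/n; after the whole pass,
        // every line seen has an equal 1/N chance of being the one held.
        if r.Intn(n) == 0 {
            held = scanner.Text()
        }
    }
    if err := scanner.Err(); err != nil {
        fmt.Fprintln(os.Stderr, "read error:", err)
        return
    }
    fmt.Println(held)
}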
The simplest solution would be to make a decision as you read each line whether to test it or throw it away... make your decision random so that you don't have the requirement of keeping the entire thing in RAM... then pass through the file once running your tests... you can also do this same style with non-random distribution tests (e.g. after X bytes, or x lines, etc)
My suggestion would be to randomize the input file in advance, e.g. using shuf
http://en.wikipedia.org/wiki/Shuf
Then you can simply read the first n lines as needed.
This doesn't help you learn more about io.Reader, but it might solve your problem nevertheless.
I had a similar need: to randomly read (specific) lines from a massive text file. I wrote a package that I call ramcsv to do this.
It first reads through the entire file once and marks the byte offset of each line (it stores this information in memory, but does not store the full line).
When you request a line number, it will transparently seek to the correct offset and give you the csv-parsed line.
(Note that the csv.Reader parameter that is passed as the second argument to ramcsv.New is used only to copy the settings into a new reader.) This could no doubt be made more efficient, but it was sufficient for my needs and spared me from reading a ~20GB text file into memory.
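The underlying idea is easy to sketch without the package: index the byte offset of every line in one pass, then Seek to a randomly chosen offset and parse just that line. The names below are illustrative, not ramcsv's real API, and the sketch assumes no quoted CSV fields spanning multiple lines:

package main

import (
    "bufio"
    "encoding/csv"
    "fmt"
    "io"
    "math/rand"
    "os"
    "strings"
    "time"
)

// indexLines records the byte offset at which each line starts.
func indexLines(r io.Reader) ([]int64, error) {
    var offsets []int64
    var pos int64
    br := bufio.NewReader(r)
    for {
        offsets = append(offsets, pos)
        line, err := br.ReadBytes('\n')
        pos += int64(len(line))
        if err == io.EOF {
            if len(line) == 0 {
                offsets = offsets[:len(offsets)-1] // file ended right after a newline
            }
            return offsets, nil
        }
        if err != nil {
            return nil, err
        }
    }
}

func main() {
    f, err := os.Open("big.csv") // illustrative file name
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        return
    }
    defer f.Close()

    offsets, err := indexLines(f)
    if err != nil || len(offsets) == 0 {
        fmt.Fprintln(os.Stderr, "no lines indexed:", err)
        return
    }

    // Pick a random line, seek to it, and CSV-parse only that record.
    r := rand.New(rand.NewSource(time.Now().UnixNano()))
    off := offsets[r.Intn(len(offsets))]
    if _, err := f.Seek(off, io.SeekStart); err != nil {
        fmt.Fprintln(os.Stderr, err)
        return
    }
    line, err := bufio.NewReader(f).ReadString('\n')
    if err != nil && err != io.EOF {
        fmt.Fprintln(os.Stderr, err)
        return
    }
    record, err := csv.NewReader(strings.NewReader(line)).Read()
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        return
    }
    fmt.Println(record)
}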
encoding/csv does not give you an io.Reader; it gives you a csv.Reader (note the lack of package qualification on the Reader return type in the definition of csv.NewReader [1], indicating that the Reader it returns belongs to the same package).
A csv.Reader implements only the methods you see there, so it looks like there is no way to do what you want short of writing your own CSV parser.
[1] http://golang.org/pkg/encoding/csv/#NewReader
Per this SO answer, there's a relatively memory efficient way to read a single random line from a large file.
package main

import (
    "bufio"
    "bytes"
    "fmt"
    "io"
    "math/rand"
    "strconv"
    "time"
)

var words []byte

func main() {
    prepareWordsVar()
    var r = rand.New(rand.NewSource(time.Now().Unix()))
    var line string
    for len(line) == 0 {
        line = getRandomLine(r)
    }
    fmt.Println(line)
}
func prepareWordsVar() {
    base := []string{"some", "really", "file", "with", "many", "manyy", "manyyy", "manyyyy", "manyyyyy", "lines."}
    // Allocate capacity only; make([]byte, 200*len(base)) would put 2000
    // zero bytes in front of the appended words.
    words = make([]byte, 0, 200*len(base))
    for i := 0; i < 200; i++ {
        for _, s := range base {
            words = append(words, []byte(s+strconv.Itoa(i)+"\n")...)
        }
    }
}
func getRandomLine(r *rand.Rand) string {
    wordsLen := int64(len(words))
    offset := r.Int63n(wordsLen)
    rd := bytes.NewReader(words)
    scanner := bufio.NewScanner(rd)
    _, _ = rd.Seek(offset, io.SeekStart)
    // discard - bound to be partial line
    if !scanner.Scan() {
        return ""
    }
    scanner.Scan()
    if err := scanner.Err(); err != nil {
        fmt.Printf("err: %s\n", err)
        return ""
    }
    // now we have a random line.
    return scanner.Text()
}
A couple of caveats:
You should use crypto/rand if you need it to be cryptographically secure.
Note bufio.Scanner's default MaxScanTokenSize and adjust the code accordingly (a sketch of raising the limit follows below).
As per original SO answer, this does introduce bias based on the length of the line.
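For reference, a sketch of raising the scanner's limit for long lines (the 1 MiB cap here is just an illustrative choice):

scanner := bufio.NewScanner(rd)
// Allow tokens up to 1 MiB instead of the default bufio.MaxScanTokenSize (64 KiB).
scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)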

How to be definite about the number of whitespace fmt.Fscanf consumes?

I am trying to implement a PPM decoder in Go. PPM is an image format that consists of a plaintext header and then some binary image data. The header looks like this (from the spec):
Each PPM image consists of the following:
A "magic number" for identifying the file type. A ppm image's magic number is the two characters "P6".
Whitespace (blanks, TABs, CRs, LFs).
A width, formatted as ASCII characters in decimal.
Whitespace.
A height, again in ASCII decimal.
Whitespace.
The maximum color value (Maxval), again in ASCII decimal. Must be less than 65536 and more than zero.
A single whitespace character (usually a newline).
I try to decode this header with the fmt.Fscanf function. The following call to
fmt.Fscanf parses the header (not addressing the caveat explained below):
var magic string
var width, height, maxVal uint
fmt.Fscanf(input, "%2s %d %d %d", &magic, &width, &height, &maxVal)
The documentation of fmt states:
Note: Fscan etc. can read one character (rune) past the input they
return, which means that a loop calling a scan routine may skip some
of the input. This is usually a problem only when there is no space
between input values. If the reader provided to Fscan implements
ReadRune, that method will be used to read characters. If the reader
also implements UnreadRune, that method will be used to save the
character and successive calls will not lose data. To attach ReadRune
and UnreadRune methods to a reader without that capability, use
bufio.NewReader.
As the very next character after the final whitespace is already the beginning of the image data, I have to be certain about how much whitespace fmt.Fscanf consumed after reading Maxval. My code must work on whatever reader was provided by the caller, and parts of it must not read past the end of the header; therefore wrapping everything in a buffered reader is not an option, because the buffered reader might read more from the input than I actually want to read.
Some testing suggests that parsing a dummy character at the end solves the issues:
var magic string
var width, height, maxVal uint
var dummy byte
fmt.Fscanf(input, "%2s %d %d %d%c", &magic, &width, &height, &maxVal, &dummy)
Is that guaranteed to work according to the specification?
No, I would not consider that safe. While it works now, the documentation states that the function reserves the right to read past the value by one character unless you have an UnreadRune() method.
By wrapping your reader in a bufio.Reader, you can ensure the reader has an UnreadRune() method. You will then need to read the final whitespace yourself.
buf := bufio.NewReader(input)
fmt.Fscanf(buf,"%2s %d %d %d",&magic,&width,&height,&maxVal)
buf.ReadRune() // remove next rune (the whitespace) from the buffer.
Edit:
As we discussed in the chat, you can assume the dummy char method works and then write a test so you know when it stops working. The test can be something like:
func TestFmtBehavior(t *testing.T) {
    // use multireader to prevent r from implementing io.RuneScanner
    r := io.MultiReader(bytes.NewReader([]byte("data  "))) // note: two trailing spaces
    n, err := fmt.Fscanf(r, "%s%c", new(string), new(byte))
    if n != 2 || err != nil {
        t.Error("failed scan", n, err)
    }
    // The dummy %c read exactly one char (the first space) past "data",
    // so one byte (the second space) should still remain in the reader.
    if n, err := r.Read(make([]byte, 5)); n != 1 {
        t.Error("assertion failed", n, err)
    }
}
