When appending to a [][]string, profiling shows the app uses around 145 MiB of memory.
defer profile.Start(profile.MemProfile).Stop()
f, _ := os.Open("test.csv") // 100 MiB File
r := csv.NewReader(f)
var records [][]string
for {
    values, err := r.Read()
    if err == io.EOF {
        break
    }
    records = append(records, values)
}
When storing each slice in a struct and appending those structs instead, the app uses around 260 MiB of memory.
defer profile.Start(profile.MemProfile).Stop()
type record struct {
    values []string
}
f, _ := os.Open("test.csv") // 100 MiB File
r := csv.NewReader(f)
var records []record
for {
    values, err := r.Read()
    if err == io.EOF {
        break
    }
    r := record{values: values}
    records = append(records, r)
}
It feels like the second example uses roughly double the memory. Can someone explain why?
For those of you using Go 1.12 through Go 1.15: debug.FreeOSMemory() would not have returned the freed memory to the OS, so htop/top will show the wrong number, and if you rely on RSS to monitor your app, that number will be wrong too.
This is because the Go runtime in 1.12-1.15 releases memory with MADV_FREE rather than MADV_DONTNEED; Go 1.11 and earlier, and Go 1.16 and later, use MADV_DONTNEED.
Go 1.16 reverted to MADV_DONTNEED; see the Go changelog for details.
Please upgrade to get predictable memory-usage numbers. If you want to stay on Go 1.12-1.15 and still use MADV_DONTNEED, run your binaries with GODEBUG=madvdontneed=1 ./main.
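If you are stuck on one of the affected versions, you can also look at the runtime's own accounting instead of RSS. The following is only an illustrative sketch (not part of the original answer): it forces a release with debug.FreeOSMemory() and prints a few runtime.MemStats fields; HeapReleased counts memory the runtime has already handed back to the OS even when RSS has not dropped yet.

package main

import (
    "fmt"
    "runtime"
    "runtime/debug"
)

func main() {
    // Allocate some memory, touch it, then drop the reference so the GC can reclaim it.
    big := make([]byte, 256<<20) // 256 MiB
    big[len(big)-1] = 1
    big = nil
    _ = big

    // Force a GC and ask the runtime to return freed memory to the OS.
    debug.FreeOSMemory()

    var m runtime.MemStats
    runtime.ReadMemStats(&m)
    // HeapInuse: bytes in in-use heap spans; HeapReleased: bytes already
    // returned to the OS; Sys: total bytes obtained from the OS.
    fmt.Printf("HeapInuse=%d MiB HeapReleased=%d MiB Sys=%d MiB\n",
        m.HeapInuse>>20, m.HeapReleased>>20, m.Sys>>20)
}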
The objective of my backend service is to process 90 million records, at least 10 million of them in 1 day.
My system config:
RAM: 2000 MB
CPU: 2 core(s)
What I am doing right now is something like this:
var wg sync.WaitGroup

// length of evs is 4455
for range evs {
    wg.Add(1)
    go migrate(&wg)
}
wg.Wait()

func migrate(wg *sync.WaitGroup) {
    defer wg.Done()
    // processing
    time.Sleep(time.Second)
}
Without knowing more detail about the type of work you need to do, your approach seems good. Some things to think about:
Reuse variables and/or clients in your processing loop, for example reusing an HTTP client instead of recreating one.
Depending on how your use case needs to handle failures, it might be convenient to use errgroup. It's a wrapper that cancels the whole group on the first error, possibly saving you a lot of time.
In the migrate function, be aware of the caveats regarding closures and goroutines.
package main

import (
    "fmt"
    "net/http"

    "golang.org/x/sync/errgroup"
)

func main() {
    g := new(errgroup.Group)
    var urls = []string{
        "http://www.someasdfasdfstupidname.com/",
        "ftp://www.golang.org/",
        "http://www.google.com/",
    }
    for _, url := range urls {
        url := url // https://golang.org/doc/faq#closures_and_goroutines
        g.Go(func() error {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
            }
            return err
        })
    }
    fmt.Println("waiting")
    if err := g.Wait(); err == nil {
        fmt.Println("Successfully fetched all URLs.")
    } else {
        fmt.Println(err)
    }
}
I found a solution. To handle this much processing, I limited the number of goroutines to 50 and increased the number of cores from 2 to 5.
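For reference, one common way to cap the number of in-flight goroutines is a buffered channel used as a counting semaphore. This is only a sketch of that pattern; the 50-goroutine limit and the slice length come from the discussion above, while the rest is illustrative.

package main

import (
    "sync"
    "time"
)

func main() {
    evs := make([]int, 4455) // placeholder for the real work items

    const maxInFlight = 50
    sem := make(chan struct{}, maxInFlight) // counting semaphore
    var wg sync.WaitGroup

    for range evs {
        wg.Add(1)
        sem <- struct{}{} // blocks while 50 goroutines are already running
        go func() {
            defer wg.Done()
            defer func() { <-sem }() // release the slot when done
            migrate()
        }()
    }
    wg.Wait()
}

func migrate() {
    // processing
    time.Sleep(time.Second)
}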
I need to regularly load over 300'000 rows x 78 columns of data into my Go program.
Currently I use (import github.com/360EntSecGroup-Skylar/excelize):
xlsx, err := excelize.OpenFile("/media/test snaps.xlsm")
if err != nil {
    fmt.Println(err)
    return
}

// read all rows into df
df := xlsx.GetRows("data")
It takes about 4 minutes on a decent PC with a Samsung 960 EVO M.2 SSD.
Is there a faster way to load this data? Currently it takes me more time to read the data than to process it. I'm also open to other file formats.
As suggested in the comments, instead of using the XLS format, use a custom, fast data format for reading and writing your table.
In the most basic case, just write the number of rows and columns to a binary file, then write all the data in one go. This will be very fast. I have created a little example below which writes 300,000 × 40 float32 values to a file and reads them back. On my machine the write takes about 400 ms and the read about 250 ms (note that the file is hot in the cache after writing it, so the initial read may take longer).
package main

import (
    "encoding/binary"
    "os"

    "github.com/gonutz/tic"
)

func main() {
    const (
        rowCount = 300000
        colCount = 40
    )
    values := make([]float32, rowCount*colCount)

    func() {
        defer tic.Toc()("write")
        f, _ := os.Create("file")
        defer f.Close()
        binary.Write(f, binary.LittleEndian, int64(rowCount))
        binary.Write(f, binary.LittleEndian, int64(colCount))
        check(binary.Write(f, binary.LittleEndian, values))
    }()

    func() {
        defer tic.Toc()("read")
        f, _ := os.Open("file")
        defer f.Close()
        var rows, cols int64
        binary.Read(f, binary.LittleEndian, &rows)
        binary.Read(f, binary.LittleEndian, &cols)
        vals := make([]float32, rows*cols)
        check(binary.Read(f, binary.LittleEndian, vals))
    }()
}

func check(err error) {
    if err != nil {
        panic(err)
    }
}
I'm currently learning Go and have started to re-write a test-data-generation program I originally wrote in Java. I've been intrigued by Go's channels / threading possibilities, as many of the programs I've written have been focused around load testing a system / recording various metrics.
Here, I am creating some data to be written out to a CSV file. I started out by generating all of the data, then passing that off to be written to a file. I then thought I'd try and implement a channel, so data could be written while it's still being generated.
It worked - it almost eliminated the overhead of generating the data first and then writing it. However, I found that this only worked if I had a channel with a buffer big enough to cope with all of the test data being generated: c := make(chan string, count), where count is the same size as the number of test data lines I am generating.
So, to my question: I'm regularly generating millions of records of test data (for load-testing applications) - should I be using a channel with a buffer that large? I can't find much documentation about restrictions on buffer size.
Running the below with a 10m count completes in ~59.5s; generating the data up front and writing it all to a file takes ~62s; using a buffer length of 1 - 100 takes ~80s.
package main

import (
    "fmt"
    "os"
    "sync/atomic"
)

const externalRefPrefix = "Ref"
const fileName = "citizens.csv"

var counter int32 = 0

func WriteCitizensForApplication(applicationId string, count int) {
    file, err := os.Create(fileName)
    if err != nil {
        panic(err)
    }
    defer file.Close()

    c := make(chan string, count)
    go generateCitizens(applicationId, count, c)
    for line := range c {
        file.WriteString(line)
    }
}

func generateCitizens(applicationId string, count int, c chan string) {
    for i := 0; i < count; i++ {
        c <- fmt.Sprintf("%v%v\n", applicationId, generateExternalRef())
    }
    close(c)
}

func generateExternalRef() string {
    ref := atomic.AddInt32(&counter, 1)
    return fmt.Sprintf("%v%08d", externalRefPrefix, ref)
}
I am getting started with RethinkDB; I have never used it before. I gave it a try together with Gorethink, following this tutorial.
To sum up this tutorial, there are two programs:
The first one updates entries infinitely.
for {
    var scoreentry ScoreEntry
    pl := rand.Intn(1000)
    sc := rand.Intn(6) - 2
    res, err := r.Table("scores").Get(strconv.Itoa(pl)).Run(session)
    if err != nil {
        log.Fatal(err)
    }
    err = res.One(&scoreentry)
    scoreentry.Score = scoreentry.Score + sc
    _, err = r.Table("scores").Update(scoreentry).RunWrite(session)
}
And the second one receives these changes and logs them.
res, err := r.Table("scores").Changes().Run(session)
var value interface{}
if err != nil {
    log.Fatalln(err)
}
for res.Next(&value) {
    fmt.Println(value)
}
In the statistics that RethinkDB shows, I can see that there are 1.5K reads and writes per second. But in the console of the second program, I see approximately 1 or 2 changes per second.
Why does this occur? Am I missing something?
This code:
r.Table("scores").Update(scoreentry).RunWrite(session)
Probably doesn't do what you think it does. It attempts to update every document in the table by merging scoreentry into it. This is why the RethinkDB console is showing so many writes per second: every time you run that query, it results in thousands of writes.
Usually you want to update documents inside of ReQL, like so:
r.Table("scores").Get(strconv.Itoa(pl)).Update(func(row r.Term) interface{} {
    return map[string]interface{}{"Score": row.Field("Score").Add(sc)}
})
If you need to do the update in Go code, though, you can replace just that one document like so:
r.Table("scores").Get(strconv.Itoa(pl)).Replace(scoreentry)
I'm not sure why it is quite that slow; it could be because, by default, each query blocks until the write has been completely flushed. I would first add some kind of instrumentation to see which operation is so slow. There are also a couple of ways you can improve the performance:
Set the Durability of the write using UpdateOpts
_, err = r.Table("scores").Update(scoreentry, r.UpdateOpts{
    Durability: "soft",
}).RunWrite(session)
Execute each query in a goroutine to allow your code to run multiple queries in parallel (you may need to use a pool of goroutines instead, but this code is just a simplified example; a worker-pool sketch follows the example below):
for {
    go func() {
        var scoreentry ScoreEntry
        pl := rand.Intn(1000)
        sc := rand.Intn(6) - 2
        res, err := r.Table("scores").Get(strconv.Itoa(pl)).Run(session)
        if err != nil {
            log.Fatal(err)
        }
        err = res.One(&scoreentry)
        scoreentry.Score = scoreentry.Score + sc
        _, err = r.Table("scores").Update(scoreentry).RunWrite(session)
    }()
}
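For completeness, here is one way such a pool of goroutines could look. This is only an illustrative sketch: the pool size and the doUpdate placeholder are mine, and in the real program the body of doUpdate would contain the Get/Update query from the loop above.

package main

import (
    "fmt"
    "math/rand"
    "sync"
)

const numWorkers = 8 // illustrative pool size

func main() {
    jobs := make(chan int, numWorkers)
    var wg sync.WaitGroup

    // Start a fixed number of workers that pull player ids from the channel.
    for w := 0; w < numWorkers; w++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for pl := range jobs {
                doUpdate(pl)
            }
        }()
    }

    // Produce work; in the real program this loop would run indefinitely.
    for i := 0; i < 1000; i++ {
        jobs <- rand.Intn(1000)
    }
    close(jobs)
    wg.Wait()
}

// doUpdate stands in for the query against the "scores" table.
func doUpdate(pl int) {
    fmt.Println("would update score for player", pl)
}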
For an assignment we are using Go, and one of the things we are going to do is parse a UniProt database file line by line to collect UniProt records.
I prefer not to share too much code, but I have a working code snippet that parses such a file (2.5 GB) correctly in 48 s (measured using the time package). It parses the file iteratively, adding lines to a record until a record end signal is reached (a full record), at which point metadata for the record is created. Then the record string is reset and a new record is collected line by line. After that I thought I would try to use goroutines.
I have gotten some tips from Stack Overflow before, and to the original code I simply added a function to handle everything concerning the metadata creation.
So, the code does the following:
1. create an empty record,
2. iterate over the file and add lines to the record,
3. if a record stop signal is found (now we have a full record), give it to a goroutine to create the metadata,
4. null the record string and continue from 2.
I also added a sync.WaitGroup to make sure that I wait, at the end, for each goroutine to finish. I thought this would actually lower the time spent parsing the database file, since it would keep parsing while the goroutines act on each record. However, the code seems to run for more than 20 minutes, indicating that something is wrong or the overhead went crazy. Any suggestions?
package main

import (
    "bufio"
    "crypto/sha1"
    "fmt"
    "io"
    "log"
    "os"
    "strings"
    "sync"
    "time"
)

type producer struct {
    parser uniprot
}

type unit struct {
    tag string
}

type uniprot struct {
    filenames     []string
    recordUnits   chan unit
    recordStrings map[string]string
}

func main() {
    p := producer{parser: uniprot{}}
    p.parser.recordUnits = make(chan unit, 1000000)
    p.parser.recordStrings = make(map[string]string)
    p.parser.collectRecords(os.Args[1])
}

func (u *uniprot) collectRecords(name string) {
    fmt.Println("file to open ", name)
    t0 := time.Now()
    wg := new(sync.WaitGroup)
    record := []string{}
    file, err := os.Open(name)
    errorCheck(err)
    scanner := bufio.NewScanner(file)
    for scanner.Scan() { // scan the file
        retText := scanner.Text()
        if strings.HasPrefix(retText, "//") {
            wg.Add(1)
            go u.handleRecord(record, wg)
            record = []string{}
        } else {
            record = append(record, retText)
        }
    }
    file.Close()
    wg.Wait()
    t1 := time.Now()
    fmt.Println(t1.Sub(t0))
}

func (u *uniprot) handleRecord(record []string, wg *sync.WaitGroup) {
    defer wg.Done()
    recString := strings.Join(record, "\n")
    t := hashfunc(recString)
    u.recordUnits <- unit{tag: t}
    u.recordStrings[t] = recString
}

func hashfunc(record string) (hashtag string) {
    hash := sha1.New()
    io.WriteString(hash, record)
    hashtag = string(hash.Sum(nil))
    return
}

func errorCheck(err error) {
    if err != nil {
        log.Fatal(err)
    }
}
First of all: your code is not thread-safe, mainly because you're accessing a map concurrently. Maps are not safe for concurrent use in Go and need to be locked. The faulty line in your code:
u.recordStrings[t] = recString
Since this will blow up when you run Go with GOMAXPROCS > 1, I'm assuming that you're not doing that. Make sure you run your application with GOMAXPROCS=2 or higher to achieve parallelism.
The default value is 1, so your code runs on a single OS thread which, of course, can't be scheduled on two CPUs or CPU cores simultaneously. Example:
$ GOMAXPROCS=2 go run udb.go uniprot_sprot_viruses.dat
Lastly: pull the values from the channel, otherwise your program will not terminate. You're creating a deadlock if the number of goroutines exceeds the channel's capacity. I tested with a 76 MiB file of data and got 16347 entries; you said your file was about 2.5 GB. Assuming linear growth, your file will exceed 1e6 records, so there are not enough slots in the channel, and your program will deadlock, giving no result while accumulating goroutines which never run, only to fail (miserably) at the end.
So the solution should be to add a goroutine which pulls the values from the channel and does something with them, as in the sketch below.
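Here is a minimal sketch of those two fixes, using simplified types based on the question. The consumer goroutine and the mutex around recordStrings are my additions, not the original code, and hashfunc is omitted for brevity (the tags are passed in precomputed).

package main

import (
    "fmt"
    "sync"
)

type unit struct {
    tag string
}

type uniprot struct {
    recordUnits   chan unit
    recordStrings map[string]string
    mu            sync.Mutex // guards recordStrings against concurrent writes
}

func (u *uniprot) handleRecord(tag, record string, wg *sync.WaitGroup) {
    defer wg.Done()
    u.recordUnits <- unit{tag: tag}

    u.mu.Lock() // serialize map writes
    u.recordStrings[tag] = record
    u.mu.Unlock()
}

func main() {
    u := &uniprot{
        recordUnits:   make(chan unit, 100),
        recordStrings: make(map[string]string),
    }

    // Consumer goroutine: drain the channel so producers can never block forever.
    done := make(chan struct{})
    go func() {
        for un := range u.recordUnits {
            _ = un // do something useful with each unit here
        }
        close(done)
    }()

    var wg sync.WaitGroup
    for i, rec := range []string{"record one", "record two", "record three"} {
        wg.Add(1)
        go u.handleRecord(fmt.Sprintf("tag-%d", i), rec, &wg)
    }
    wg.Wait()

    close(u.recordUnits) // no more producers; let the consumer drain and exit
    <-done

    fmt.Println("stored records:", len(u.recordStrings))
}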
As a side note: If you're worried about performance, do not use strings as they're always copied. Use []byte instead.
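For example, a []byte-based variant of hashfunc might look like the sketch below (my illustration, keeping the same raw-byte tag format as the original):

package main

import (
    "crypto/sha1"
    "fmt"
)

// hashfunc computes a SHA-1 tag for a record without first converting it to a string.
func hashfunc(record []byte) string {
    sum := sha1.Sum(record) // sum is a [20]byte array
    return string(sum[:])   // same raw-byte tag as the original hashfunc
}

func main() {
    fmt.Printf("%x\n", hashfunc([]byte("example record")))
}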