I am trying to write student marks to a CSV file in Go.
It prints the desired 10 results per page with Println, but only the last value (not all 10) is saved to the CSV.
This is what I am doing:
Visitor visits studentmarks.com/page=1
Marks for 10 students are displayed and it is also saved in CSV
Visitor clicks next page and he is navigated to studentmarks.com/page=2
Marks for another 10 students are displayed and it is also saved in subsequent column/rows in the CSV
and so on
fmt.Fprintf(w, KeyTemplate, key.fname, key.marks, key.lname) is working fine and displays all 10 results per page, but I am unable to save all 10 results in the CSV (with my current code, only the last result is saved).
Here is the snippet of my code that is responsible for printing and saving the results.
func PageRequest(w http.ResponseWriter, r *http.Request) {
// Default page number is 1
if len(r.URL.Path) <= 1 {
r.URL.Path = "/1"
}
// Page number is not negative or 0
page.Abs(page)
if page.Cmp(one) == -1 {
page.SetInt64(1)
}
// Page header
fmt.Fprintf(w, PageHeader, pages, previous, next)
// Marks for UID
UID, length := compute(start)
for i := 0; i < length; i++ {
key := UID[i]
fmt.Fprintf(w, KeyTemplate, key.fname, key.marks, key.lname, key.remarks)
// Save in csv
csvfile, err := os.Create("marks.csv")
if err != nil {
fmt.Println("Error:", err)
return
}
defer csvfile.Close()
records := [][]string{{key.fname, key.marks, key.lname, key.remarks}}
writer := csv.NewWriter(csvfile)
for _, record := range records {
err := writer.Write(record)
if err != nil {
fmt.Println("Error:", err)
return
}
}
writer.Flush()
}
// Page Footer
fmt.Fprintf(w, PageFooter, previous, next)
}
How can I print and save (in the CSV) all 10 results using Go?
The basic problem is that you are calling os.Create. The documentation for os.Create says:
Create creates the named file with mode 0666 (before umask), truncating it if it already exists. If successful, methods on the returned File can be used for I/O; the associated file descriptor has mode O_RDWR. If there is an error, it will be of type *PathError.
So each call to os.Create will remove all content from the file you passed. Instead, what you want is probably os.OpenFile with the os.O_CREATE, os.O_WRONLY and os.O_APPEND flags. This will make sure that the file will be created if it doesn't exist, but won't truncate it.
But there is another problem in your code. You are calling defer csvfile.Close() inside the loop. A deferred function will only be executed once the function returns and not after the loop iteration. This can lead to problems, especially since you are opening the same file over and over again.
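To make the defer point concrete, here is a minimal sketch of my own (not taken from your program) showing that deferred calls only run when the function returns, not at the end of each loop iteration:

package main

import (
	"fmt"
	"os"
)

func writeMany() error {
	for i := 0; i < 10; i++ {
		f, err := os.Create(fmt.Sprintf("file-%d.txt", i))
		if err != nil {
			return err
		}
		// This does NOT run at the end of the iteration; it runs when
		// writeMany returns, so all 10 files stay open until then.
		defer f.Close()
	}
	return nil // the 10 deferred Close calls all run here, in LIFO order
}

func main() {
	if err := writeMany(); err != nil {
		fmt.Println("Error:", err)
	}
}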
Instead you should open the file once before the loop so that you only need to close it once. Like this:
package main
import (
"encoding/csv"
"fmt"
"net/http"
"os"
)
func PageRequest(w http.ResponseWriter, r *http.Request) {
// Default page number is 1
if len(r.URL.Path) <= 1 {
r.URL.Path = "/1"
}
// Page number is not negative or 0
page.Abs(page)
if page.Cmp(one) == -1 {
page.SetInt64(1)
}
// Page header
fmt.Fprintf(w, PageHeader, pages, previous, next)
// Save in csv
csvfile, err := os.OpenFile("marks.csv", os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0666)
if err != nil {
fmt.Println("Error:", err)
return
}
defer csvfile.Close()
writer := csv.NewWriter(csvfile)
defer writer.Flush()
// Marks for UID
UID, length := compute(start)
for i := 0; i < length; i++ {
key := UID[i]
fmt.Fprintf(w, KeyTemplate, key.fname, key.marks, key.lname, key.remarks)
records := [][]string{{key.fname, key.marks, key.lname, key.remarks}}
for _, record := range records {
err := writer.Write(record)
if err != nil {
fmt.Println("Error:", err)
return
}
}
}
// Page Footer
fmt.Fprintf(w, PageFooter, previous, next)
}
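One small addition on top of that fix, as a hedged suggestion: csv.Writer buffers its output, and Flush itself does not report failures, so it is worth asking the writer for its deferred error. Replacing the bare defer writer.Flush() above with something like this surfaces it:

defer func() {
	writer.Flush()
	if err := writer.Error(); err != nil {
		fmt.Println("Error flushing csv:", err)
	}
}()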
I completed the suggested Go tour, watched some tutorials and gopher conferences on YouTube. And that's pretty much it.
I have a project which requires me to send GET requests and store the results in files. But the number of URLs is around 80 million.
I'm testing with 1000 URLs only.
Problem: I think I haven't managed to make it concurrent, although I've followed some guidelines. I don't know what's wrong. But maybe I'm wrong and it is concurrent; it just didn't seem fast to me, the speed felt like sequential requests.
Here is the code I've written:
package main
import (
"bufio"
"io/ioutil"
"log"
"net/http"
"os"
"sync"
"time"
)
var wg sync.WaitGroup // synchronization to wait for all the goroutines
func crawler(urlChannel <-chan string) {
defer wg.Done()
client := &http.Client{Timeout: 10 * time.Second} // single client is sufficient for multiple requests
for urlItem := range urlChannel {
req1, _ := http.NewRequest("GET", "http://"+urlItem, nil) // generating the request
req1.Header.Add("User-agent", "Mozilla/5.0 (X11; Linux i586; rv:31.0) Gecko/20100101 Firefox/74.0") // changing user-agent
resp1, respErr1 := client.Do(req1) // sending the prepared request and getting the response
if respErr1 != nil {
continue
}
defer resp1.Body.Close()
if resp1.StatusCode/100 == 2 { // means server responded with 2xx code
text1, readErr1 := ioutil.ReadAll(resp1.Body) // try to read the sourcecode of the website
if readErr1 != nil {
log.Fatal(readErr1)
}
f1, fileErr1 := os.Create("200/" + urlItem + ".txt") // creating the relative file
if fileErr1 != nil {
log.Fatal(fileErr1)
}
defer f1.Close()
_, writeErr1 := f1.Write(text1) // writing the sourcecode into our file
if writeErr1 != nil {
log.Fatal(writeErr1)
}
}
}
}
func main() {
file, err := os.Open("urls.txt") // the file containing the url's
if err != nil {
log.Fatal(err)
}
defer file.Close() // don't forget to close the file
urlChannel := make(chan string, 1000) // create a channel to store all the url's
scanner := bufio.NewScanner(file) // each line has another url
for scanner.Scan() {
urlChannel <- scanner.Text()
}
close(urlChannel)
_ = os.Mkdir("200", 0755) // if it's there, it will create an error, and we will simply ignore it
for i := 0; i < 10; i++ {
wg.Add(1)
go crawler(urlChannel)
}
wg.Wait()
}
My question is: why is this code not working concurrently? How can I solve the problem I've mentioned above? Is there something I'm doing wrong when making concurrent GET requests?
Here's some code to get you thinking. I put the URLs in the code so it is self-sufficient, but you'd probably be piping them to stdin in practice. There are a few things I'm doing here that I think are improvements, or at least worth thinking about.
Before we get started, I'll point out that I put the complete URL in the input stream. For one thing, this lets me support both http and https. I don't really see the logic behind hard-coding the scheme in the code rather than leaving it in the data.
First, it can handle arbitrarily sized response bodies (your version reads the body into memory, so it is limited by some number of concurrent large requests filling memory). I do this with io.Copy().
text1, readErr1 := ioutil.ReadAll(resp1.Body) reads the entire HTTP body. If the body is large, it will take up lots of memory. io.Copy(f1, resp1.Body) would instead copy the data from the HTTP response body directly to the file, without having to hold the whole thing in memory. It may be done in one Read/Write or many.
http.Response.Body is an io.ReadCloser because the HTTP protocol expects the body to be read progressively. http.Response does not yet have the entire body, until it is read. That's why it's not just a []byte. Writing it to the filesystem progressively while the data "streams" in from the tcp socket means that a finite amount of system resources can download an unlimited amount of data.
But there's even more benefit. io.Copy will call ReadFrom() on the file. If you look at the Linux implementation (for example), https://golang.org/src/os/readfrom_linux.go, and dig a bit, you'll see it actually uses copy_file_range. That system call is cool because:
The copy_file_range() system call performs an in-kernel copy between two file descriptors without the additional cost of transferring data from the kernel to user space and then back into the kernel.
*os.File knows how to ask the kernel to deliver data directly from the tcp socket to the file without your program even having to touch it.
See https://golang.org/pkg/io/#Copy.
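For comparison, a stripped-down sketch of the streaming approach (my own code, with a made-up URL; it is not part of the full example below):

package main

import (
	"io"
	"net/http"
	"os"
)

func main() {
	resp, err := http.Get("http://example.com/big-file") // hypothetical URL
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	f, err := os.Create("big-file")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	// Streams the body to disk chunk by chunk instead of buffering the
	// whole thing in memory the way ioutil.ReadAll does.
	if _, err := io.Copy(f, resp.Body); err != nil {
		panic(err)
	}
}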
Second, I make sure to use all the URL components in the filename. URLs with different query strings go to different files. The fragment probably doesn't differentiate response bodies, so including it in the path may be ill-considered. There's no awesome heuristic for turning URLs into valid file paths - if this were a serious task, I'd probably store the data in files based on a shasum of the URL or something - and create an index of results stored in a metadata file.
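A hedged sketch of that shasum idea (the directory layout and index format here are my invention, not something the code below does):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"path/filepath"
)

func main() {
	u := "https://farrellit.net/?3=2"
	sum := sha256.Sum256([]byte(u))
	// Store each body under the SHA-256 of its URL...
	path := filepath.Join("bodies", hex.EncodeToString(sum[:]))
	// ...and record the url -> path mapping in a metadata index elsewhere.
	fmt.Printf("%s\t%s\n", u, path)
}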
Third, I handle all errors. req1, _ := http.NewRequest(... might seem like a convenient shortcut, but what it really means is that you won't know the real cause of any errors - at best. I usually add some descriptive text to the errors when percolating up, to make sure I can easily tell which error I'm returning.
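For example, a sketch of wrapping an error with context (the %w verb needs Go 1.13+; in the code below I just log with context instead of returning):

req, err := http.NewRequest("GET", u.String(), nil)
if err != nil {
	return fmt.Errorf("building request for %s: %w", u.String(), err)
}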
Finally, I return successfully processed URLs so that I can see the final results. When scanning millions of URLS, you'd probably also want a list of which failed, but a count of successful is a good start at sending final data back for summary.
package main
import (
"bufio"
"bytes"
"fmt"
"io"
"log"
"net/http"
"net/url"
"os"
"path/filepath"
"time"
)
const urls_text = `http://danf.us/
https://farrellit.net/?3=2
`
func crawler(urls <-chan *url.URL, done chan<- int) {
var processed int = 0
defer func() { done <- processed }()
client := http.Client{Timeout: 10 * time.Second}
for u := range urls {
if req, err := http.NewRequest("GET", u.String(), nil); err != nil {
log.Printf("Couldn't create new request for %s: %s", u.String(), err.Error())
} else {
req.Header.Add("User-agent", "Mozilla/5.0 (X11; Linux i586; rv:31.0) Gecko/20100101 Firefox/74.0") // changing user-agent
if res, err := client.Do(req); err != nil {
log.Printf("Failed to get %s: %s", u.String(), err.Error())
} else {
filename := filepath.Base(u.EscapedPath())
if filename == "/" || filename == "" {
filename = "response"
} else {
log.Printf("URL Filename is '%s'", filename)
}
destpath := filepath.Join(
res.Status, u.Scheme, u.Hostname(), u.EscapedPath(),
fmt.Sprintf("?%s", u.RawQuery), fmt.Sprintf("#%s", u.Fragment), filename,
)
if err := os.MkdirAll(filepath.Dir(destpath), 0755); err != nil {
log.Printf("Couldn't create directory %s: %s", filepath.Dir(destpath), err.Error())
} else if f, err := os.OpenFile(destpath, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644); err != nil {
log.Printf("Couldn't open destination file %s: %s", destpath, err.Error())
} else {
if b, err := io.Copy(f, res.Body); err != nil {
log.Printf("Could not copy %s body to %s: %s", u.String(), destpath, err.Error())
} else {
log.Printf("Copied %d bytes from body of %s to %s", b, u.String(), destpath)
processed++
}
f.Close()
}
res.Body.Close()
}
}
}
}
const workers = 3
func main() {
urls := make(chan *url.URL)
done := make(chan int)
var submitted int = 0
var inputted int = 0
var successful int = 0
for i := 0; i < workers; i++ {
go crawler(urls, done)
}
sc := bufio.NewScanner(bytes.NewBufferString(urls_text))
for sc.Scan() {
inputted++
if u, err := url.Parse(sc.Text()); err != nil {
log.Printf("Could not parse %s as url: %v", sc.Text(), err)
} else {
submitted++
urls <- u
}
}
close(urls)
for i := 0; i < workers; i++ {
successful += <-done
}
log.Printf("%d urls input, %d could not be parsed. %d/%d valid URLs successful (%.0f%%)",
inputted, inputted-submitted,
successful, submitted,
float64(successful)/float64(submitted)*100.0,
)
}
When setting up a concurrent pipeline, a good guideline to follow is to always first set up and instantiate the listeners that will execute concurrently (in your case, crawlers), and then start feeding them data through the pipeline (in your case, the urlChannel).
In your example, the only thing preventing a deadlock is the fact that you've instantiated a buffered channel with the same number of rows that your test file has (1000 rows). What the code does is it puts URLs inside the urlChannel. Since there are 1000 rows inside your file, the urlChannel can take all of them without blocking. If you put more URLs inside the file, the execution will block after filling up the urlChannel.
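To see that failure mode directly, here is a minimal sketch of mine; running it dies with "fatal error: all goroutines are asleep - deadlock!" because the sends happen before any receiver exists:

package main

func main() {
	urlChannel := make(chan string) // unbuffered
	urls := []string{"example.com", "example.org"}
	for _, u := range urls {
		urlChannel <- u // blocks forever: nothing is receiving yet
	}
	close(urlChannel)
	// The workers would be started here, but execution never gets this far.
}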
Here is the version of the code that should work:
package main
import (
"bufio"
"io/ioutil"
"log"
"net/http"
"os"
"sync"
"time"
)
func crawler(wg *sync.WaitGroup, urlChannel <-chan string) {
defer wg.Done()
client := &http.Client{Timeout: 10 * time.Second} // single client is sufficient for multiple requests
for urlItem := range urlChannel {
req1, _ := http.NewRequest("GET", "http://"+urlItem, nil) // generating the request
req1.Header.Add("User-agent", "Mozilla/5.0 (X11; Linux i586; rv:31.0) Gecko/20100101 Firefox/74.0") // changing user-agent
resp1, respErr1 := client.Do(req1) // sending the prepared request and getting the response
if respErr1 != nil {
continue
}
if resp1.StatusCode/100 == 2 { // means server responded with 2xx code
text1, readErr1 := ioutil.ReadAll(resp1.Body) // try to read the sourcecode of the website
if readErr1 != nil {
log.Fatal(readErr1)
}
resp1.Body.Close()
f1, fileErr1 := os.Create("200/" + urlItem + ".txt") // creating the relative file
if fileErr1 != nil {
log.Fatal(fileErr1)
}
_, writeErr1 := f1.Write(text1) // writing the sourcecode into our file
if writeErr1 != nil {
log.Fatal(writeErr1)
}
f1.Close()
} else {
resp1.Body.Close() // close non-2xx bodies too, so connections aren't leaked
}
}
}
func main() {
var wg sync.WaitGroup
file, err := os.Open("urls.txt") // the file containing the url's
if err != nil {
log.Fatal(err)
}
defer file.Close() // don't forget to close the file
urlChannel := make(chan string)
_ = os.Mkdir("200", 0755) // if it's there, it will create an error, and we will simply ignore it
// first, initialize crawlers
wg.Add(10)
for i := 0; i < 10; i++ {
go crawler(&wg, urlChannel)
}
//after crawlers are initialized, start feeding them data through the channel
scanner := bufio.NewScanner(file) // each line has another url
for scanner.Scan() {
urlChannel <- scanner.Text()
}
close(urlChannel)
wg.Wait()
}
I'm trying to improve the performance of an app.
One part of its code uploads a file to a server in chunks.
The original version simply does this in a sequential loop. However, it's slow, and during the sequence it also needs to talk to another server before uploading each chunk.
The upload of chunks could simply be placed in a goroutine. It works, but is not a good solution, because if the source file is extremely large it ends up using a large amount of memory.
So, I try to limit the number of active goroutines by using a buffered channel. Here is some code that shows my attempt. I've stripped it down to show the concept and you can run it to test for yourself.
package main
import (
"fmt"
"io"
"os"
"time"
)
const defaultChunkSize = 1 * 1024 * 1024
// Lets have 4 workers
var c = make(chan int, 4)
func UploadFile(f *os.File) error {
fi, err := f.Stat()
if err != nil {
return fmt.Errorf("err: %s", err)
}
size := fi.Size()
total := (int)(size/defaultChunkSize + 1)
// Upload parts
buf := make([]byte, defaultChunkSize)
for partno := 1; partno <= total; partno++ {
readChunk := func(offset int, buf []byte) (int, error) {
fmt.Println("readChunk", partno, offset)
n, err := f.ReadAt(buf, int64(offset))
if err != nil {
return n, err
}
return n, nil
}
// This will block if there are not enough worker slots available
c <- partno
// The actual worker.
go func() {
offset := (partno - 1) * defaultChunkSize
n, err := readChunk(offset, buf)
if err != nil && err != io.EOF {
return
}
err = uploadPart(partno, buf[:n])
if err != nil {
fmt.Println("Uploadpart failed:", err)
}
<-c
}()
}
return nil
}
func uploadPart(partno int, buf []byte) error {
fmt.Printf("Uploading partno: %d, buflen=%d\n", partno, len(buf))
// Actually upload the part. Lets test it by instead writing each
// buffer to another file. We can then use diff to compare the
// source and dest files.
// Open file. Seek to (partno - 1) * defaultChunkSize, write buffer
f, err := os.OpenFile("/home/matthewh/Downloads/out.tar.gz", os.O_CREATE|os.O_WRONLY, 0755)
if err != nil {
fmt.Printf("err: %s\n", err)
}
n, err := f.WriteAt(buf, int64((partno-1)*defaultChunkSize))
if err != nil {
fmt.Printf("err=%s\n", err)
}
fmt.Printf("%d bytes written\n", n)
defer f.Close()
return nil
}
func main() {
filename := "/home/matthewh/Downloads/largefile.tar.gz"
fmt.Printf("Opening file: %s\n", filename)
f, err := os.Open(filename)
if err != nil {
panic(err)
}
UploadFile(f)
}
It almost works. But there are several problems.
1) The final partno 22 occurs 3 times. The correct length is actually 612545, as the file length isn't a multiple of 1MB.
// Sample output
...
readChunk 21 20971520
readChunk 22 22020096
Uploading partno: 22, buflen=1048576
Uploading partno: 22, buflen=612545
Uploading partno: 22, buflen=1048576
Another problem: the upload could fail, and I am not familiar enough with Go to know how best to handle failure of a goroutine.
Finally, I ordinarily want to return some data from uploadPart when it succeeds. Specifically, it will be a string (an HTTP ETag header value). These ETag values need to be collected by the main function.
What is a better way to structure this code in this instance? I've not yet found a good Go design pattern that correctly fulfills my needs here.
Skipping for the moment the question of how better to structure this code, I see a bug in your code which may be causing the problem you're seeing. Since the function you're running in the goroutine uses the variable partno, which changes with each iteration of the loop, your goroutine isn't necessarily seeing the value of partno at the time you invoked the goroutine. A common way of fixing this is to create a local copy of that variable inside the loop:
for partno := 1; partno <= total; partno++ {
partno := partno
// ...
}
Data race #1
Multiple goroutines are using the same buffer concurrently. Note that one goroutine may be filling it with a new chunk while another is still reading an old chunk from it. Instead, each goroutine should have its own buffer.
Data race #2
As Andy Schweig has pointed out, the value in partno is updated by the loop before the goroutine created in that iteration has a chance to read it. This is why the final partno 22 occurs multiple times. To fix it, you can pass partno as an argument to the anonymous function. That will ensure each goroutine has its own part number.
Also, you can use a channel to pass the results from the workers - maybe a struct type with the part number, the ETag, and an error. That way, you will be able to observe the progress and retry failed uploads.
For an example of a good pattern check out this example from the GOPL book.
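As a hedged sketch of that idea (the struct, channel names, and the simulated upload are mine, not from the question), collecting per-part results such as ETags over a channel looks roughly like this:

package main

import "fmt"

// partResult carries everything main needs to track progress and retries.
type partResult struct {
	partno int
	etag   string // e.g. the ETag response header from the upload
	err    error
}

func main() {
	const total = 3
	results := make(chan partResult, total)
	for p := 1; p <= total; p++ {
		go func(partno int) {
			// A real worker would call uploadPart and report its outcome.
			results <- partResult{partno: partno, etag: fmt.Sprintf("etag-%d", partno)}
		}(p)
	}
	for i := 0; i < total; i++ {
		r := <-results
		if r.err != nil {
			fmt.Println("part", r.partno, "failed, schedule a retry:", r.err)
			continue
		}
		fmt.Println("part", r.partno, "done, etag", r.etag)
	}
}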
Suggested changes
As noted by dev.bmax, buf is moved into the goroutine; as noted by Andy Schweig, partno is now a parameter to the anonymous function. I also added a WaitGroup, since UploadFile was exiting before the uploads were complete, and deferred f.Close() on the file - a good habit.
package main
import (
"fmt"
"io"
"os"
"sync"
"time"
)
const defaultChunkSize = 1 * 1024 * 1024
// wg for uploads to complete
var wg sync.WaitGroup
// Lets have 4 workers
var c = make(chan int, 4)
func UploadFile(f *os.File) error {
// wait for all the uploads to complete before function exit
defer wg.Wait()
fi, err := f.Stat()
if err != nil {
return fmt.Errorf("err: %s", err)
}
size := fi.Size()
fmt.Printf("file size: %v\n", size)
total := int(size/defaultChunkSize + 1)
// Upload parts
for partno := 1; partno <= total; partno++ {
readChunk := func(offset int, buf []byte, partno int) (int, error) {
fmt.Println("readChunk", partno, offset)
n, err := f.ReadAt(buf, int64(offset))
if err != nil {
return n, err
}
return n, nil
}
// This will block if there are not enough worker slots available
c <- partno
// The actual worker.
wg.Add(1) // register before starting the goroutine, so wg.Wait can't return too early
go func(partno int) {
defer wg.Done()
buf := make([]byte, defaultChunkSize)
offset := (partno - 1) * defaultChunkSize
n, err := readChunk(offset, buf, partno)
if err != nil && err != io.EOF {
return
}
err = uploadPart(partno, buf[:n])
if err != nil {
fmt.Println("Uploadpart failed:", err)
}
<-c
}(partno)
}
return nil
}
func uploadPart(partno int, buf []byte) error {
fmt.Printf("Uploading partno: %d, buflen=%d\n", partno, len(buf))
// Actually do the upload. Simulate long running task with a sleep
time.Sleep(time.Second)
return nil
}
func main() {
filename := "/home/matthewh/Downloads/largefile.tar.gz"
fmt.Printf("Opening file: %s\n", filename)
f, err := os.Open(filename)
if err != nil {
panic(err)
}
defer f.Close()
UploadFile(f)
}
I'm sure you can deal a little smarter with the buf situation; I'm just letting Go deal with the garbage. Since you are limiting your workers to a specific number, 4, you really need only 4 x defaultChunkSize buffers. Please do share if you come up with something simple and shareworthy.
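For what it's worth, here is one simple sketch of that (my own, not tested against the real uploader): a buffered channel acting as a free list of exactly 4 buffers. Acquiring a buffer doubles as the worker-slot limit, so the separate c channel would no longer be needed:

package main

import "fmt"

const defaultChunkSize = 1 * 1024 * 1024

func main() {
	const workers = 4
	// The channel holds idle buffers, so at most 4 are ever allocated.
	bufs := make(chan []byte, workers)
	for i := 0; i < workers; i++ {
		bufs <- make([]byte, defaultChunkSize)
	}
	done := make(chan struct{})
	const total = 10
	for partno := 1; partno <= total; partno++ {
		buf := <-bufs // acquire; blocks while all 4 buffers are busy
		go func(partno int, buf []byte) {
			fmt.Println("read and upload part", partno, "with a reused buffer")
			bufs <- buf // release the buffer for the next part
			done <- struct{}{}
		}(partno, buf)
	}
	for i := 0; i < total; i++ {
		<-done
	}
}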
Have fun!
I'm new to Golang, starting out with some examples. Currently, what I'm trying to do is read a file line by line and replace a line with another string if it meets a certain condition.
The file I use for testing purposes contains four lines:
one
two
three
four
The code working on that file looks like this:
func main() {
file, err := os.OpenFile("test.txt", os.O_RDWR, 0666)
if err != nil {
panic(err)
}
reader := bufio.NewReader(file)
for {
fmt.Print("Try to read ...\n")
pos, _ := file.Seek(0, 1)
log.Printf("Position in file is: %d", pos)
bytes, _, _ := reader.ReadLine()
if len(bytes) == 0 {
break
}
lineString := string(bytes)
if(lineString == "two") {
file.Seek(int64(-(len(lineString))), 1)
file.WriteString("This is a test.")
}
fmt.Printf(lineString + "\n")
}
file.Close()
}
As you can see in the code snippet, I want to replace the string "two" with "This is a test" as soon as that string is read from the file.
In order to get the current position within the file, I use Go's Seek method.
However, what happens is that the last line always gets replaced by This is a test, leaving the file looking like this:
one
two
three
This is a test
Examining the output of the print statement which writes the current file position to the terminal, I get this kind of output after the first line has been read:
2016/12/28 21:10:31 Try to read ...
2016/12/28 21:10:31 Position in file is: 19
So after the first read, the position cursor already points to the end of my file, which explains why the new string gets appended at the end. Does anyone understand what is happening here, or rather what is causing this behavior?
The Reader is not controlled by file.Seek. You have declared the reader as reader := bufio.NewReader(file) and you then read one line at a time with bytes, _, _ := reader.ReadLine(); however, file.Seek does not change the position that the reader reads from. The bufio.Reader fills its internal buffer from the file in one large read, which is why the underlying file offset is already at the end of the 19-byte file after the first ReadLine.
I suggest you read about the ReadSeeker in the docs and switch over to using that. There is also an example using the SectionReader.
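A small sketch (mine, reusing the four-line test.txt from the question, which is 19 bytes) makes the read-ahead visible:

package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
)

func main() {
	f, err := os.Open("test.txt")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	reader := bufio.NewReader(f)
	line, _ := reader.ReadString('\n')
	pos, _ := f.Seek(0, io.SeekCurrent)
	// The reader handed back one line, but it filled its whole buffer
	// from the file, so the underlying offset is already 19 (the end).
	fmt.Printf("read %q, but the file offset is %d\n", line, pos)
}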
Aside from the incorrect seek usage, the difficulty is that the line you're replacing isn't the same length as the replacement. The standard approach is to create a new (temporary) file with the modifications. Assuming that is successful, replace the original file with the new one.
package main
import (
"bufio"
"io"
"io/ioutil"
"log"
"os"
)
func main() {
// file we're modifying
name := "text.txt"
// open original file
f, err := os.Open(name)
if err != nil {
log.Fatal(err)
}
defer f.Close()
// create temp file
tmp, err := ioutil.TempFile("", "replace-*")
if err != nil {
log.Fatal(err)
}
defer tmp.Close()
// replace while copying from f to tmp
if err := replace(f, tmp); err != nil {
log.Fatal(err)
}
// make sure the tmp file was successfully written to
if err := tmp.Close(); err != nil {
log.Fatal(err)
}
// close the file we're reading from
if err := f.Close(); err != nil {
log.Fatal(err)
}
// overwrite the original file with the temp file
if err := os.Rename(tmp.Name(), name); err != nil {
log.Fatal(err)
}
}
func replace(r io.Reader, w io.Writer) error {
// use scanner to read line by line
sc := bufio.NewScanner(r)
for sc.Scan() {
line := sc.Text()
if line == "two" {
line = "This is a test."
}
if _, err := io.WriteString(w, line+"\n"); err != nil {
return err
}
}
return sc.Err()
}
For more complex replacements, I've implemented a package which can replace regular expression matches. https://github.com/icholy/replace
import (
"io"
"regexp"
"github.com/icholy/replace"
"golang.org/x/text/transform"
)
func replace2(r io.Reader, w io.Writer) error {
// compile multi-line regular expression
re := regexp.MustCompile(`(?m)^two$`)
// create replace transformer
tr := replace.RegexpString(re, "This is a test.")
// copy while transforming
_, err := io.Copy(w, transform.NewReader(r, tr))
return err
}
The os package has an Expand function, which I believe can be used to solve a similar problem.
Explanation:
file.txt
one
two
${num}
four
main.go
package main
import (
"fmt"
"os"
)
var FILENAME = "file.txt"
func main() {
file, err := os.ReadFile(FILENAME)
if err != nil {
panic(err)
}
mapper := func(placeholderName string) string {
switch placeholderName {
case "num":
return "three"
}
return ""
}
fmt.Println(os.Expand(string(file), mapper))
}
output
one
two
three
four
Additionally, you may create a config file (YAML or JSON) and load its data into a map that serves as a lookup table for the placeholders and their replacement strings, then modify the mapper to look up placeholders from the input file in this table.
e.g. the map will look like this:
table := map[string]string{
"num": "three",
}
mapper := func(placeholderName string) string {
if val, ok := table[placeholderName]; ok {
return val
}
return ""
}
References:
os.Expand documentation: https://pkg.go.dev/os#Expand
Playground
I am trying to build a zip archive from a large number of small-to-medium-sized files. I want to be able to do this concurrently, since compression is CPU-intensive and I'm running on a multi-core server. Also, I don't want to hold the whole archive in memory, since it might turn out to be large.
My question is: do I have to compress every file separately and then manually combine everything together with the zip header, checksum, etc.?
Any help would be greatly appreciated.
I don't think you can combine the zip headers.
What you could do is, run the zip.Writer sequentially, in a separate goroutine, and then spawn a new goroutine for each file that you want to read, and pipe those to the goroutine that is zipping them.
This should reduce the IO overhead that you get by reading the files sequentially, although it probably won't leverage multiple cores for the archiving itself.
Here's a working example. Note that, to keep things simple,
it does not handle errors nicely, just panics if something goes wrong,
and it does not use the defer statement too much, to demonstrate the order in which things should happen.
Since defer is LIFO, it can sometimes be confusing when you stack a lot of them together.
package main
import (
"archive/zip"
"io"
"os"
"sync"
)
func ZipWriter(files chan *os.File) *sync.WaitGroup {
f, err := os.Create("out.zip")
if err != nil {
panic(err)
}
var wg sync.WaitGroup
wg.Add(1)
zw := zip.NewWriter(f)
go func() {
// Note the order (LIFO):
defer wg.Done() // 2. signal that we're done
defer f.Close() // 1. close the file
var err error
var fw io.Writer
for f := range files {
// Loop until channel is closed.
if fw, err = zw.Create(f.Name()); err != nil {
panic(err)
}
io.Copy(fw, f)
if err = f.Close(); err != nil {
panic(err)
}
}
// The zip writer must be closed *before* f.Close() is called!
if err = zw.Close(); err != nil {
panic(err)
}
}()
return &wg
}
func main() {
files := make(chan *os.File)
wait := ZipWriter(files)
// Send all files to the zip writer.
var wg sync.WaitGroup
wg.Add(len(os.Args)-1)
for i, name := range os.Args {
if i == 0 {
continue
}
// Read each file in parallel:
go func(name string) {
defer wg.Done()
f, err := os.Open(name)
if err != nil {
panic(err)
}
files <- f
}(name)
}
wg.Wait()
// Once we're done sending the files, we can close the channel.
close(files)
// This will cause ZipWriter to break out of the loop, close the file,
// and unblock the next mutex:
wait.Wait()
}
Usage: go run example.go /path/to/*.log.
This is the order in which things should be happening:
Open output file for writing.
Create a zip.Writer with that file.
Kick off a goroutine listening for files on a channel.
Go through each file, this can be done in one goroutine per file.
Send each file to the goroutine created in step 3.
After processing each file in said goroutine, close the file to free up resources.
Once each file has been sent to said goroutine, close the channel.
Wait until the zipping has been done (which is done sequentially).
Once zipping is done (channel exhausted), the zip writer should be closed.
Only when the zip writer is closed, should the output file be closed.
Finally everything is closed, so signal via the sync.WaitGroup to tell the calling function that we're good to go. (A channel could also be used here, but sync.WaitGroup seems more elegant.)
When you get the signal from the zip writer that everything is properly closed, you can exit from main and terminate nicely.
This might not answer your question, but I used similar code to generate zip archives on-the-fly for a web service some time ago. It performed quite well, even though the actual zipping was done in a single goroutine. Overcoming the IO bottleneck can already be an improvement.
From the look of it, you won't be able to parallelise the compression using the standard library archive/zip package because:
Compression is performed by the io.Writer returned by zip.Writer.Create or CreateHeader.
Calling Create/CreateHeader implicitly closes the writer returned by the previous call.
So passing the writers returned by Create to multiple goroutines and writing to them in parallel will not work.
If you wanted to write your own parallel zip writer, you'd probably want to structure it something like this:
Have multiple goroutines compress files using the compress/flate module, and keep track of the CRC32 value and length of the uncompressed data. The output should be directed to temporary files. Note the compressed size of the data.
Once everything has been compressed, start writing the Zip file starting with the header.
Write out the file header followed by the contents of the corresponding temporary file for each compressed file.
Write out the central directory record and end record at the end of the file. All the required information should be available at this point.
For added parallelism, step 1 could be performed in parallel with the remaining steps by using a channel to indicate when compression of each file completes.
Due to the file format, you won't be able to perform parallel compression without either storing compressed data in memory or in temporary files.
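To make step 1 of the outline above concrete, here is a rough sketch (my own code, a starting point rather than a tested implementation): compress a single file to a temporary file with compress/flate while recording the CRC-32 and the sizes that the zip headers will need later.

package main

import (
	"compress/flate"
	"hash/crc32"
	"io"
	"os"
)

// compressed records what the zip file and central directory headers need.
type compressed struct {
	name    string
	crc     uint32
	size    int64 // uncompressed size
	csize   int64 // compressed size
	tmpPath string
}

func compressToTemp(path string) (*compressed, error) {
	in, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer in.Close()
	tmp, err := os.CreateTemp("", "zippart-*")
	if err != nil {
		return nil, err
	}
	defer tmp.Close()
	crc := crc32.NewIEEE()
	fw, err := flate.NewWriter(tmp, flate.DefaultCompression)
	if err != nil {
		return nil, err
	}
	// Tee the input through the CRC while compressing it to the temp file.
	n, err := io.Copy(fw, io.TeeReader(in, crc))
	if err != nil {
		return nil, err
	}
	if err := fw.Close(); err != nil {
		return nil, err
	}
	csize, err := tmp.Seek(0, io.SeekCurrent) // bytes written = compressed size
	if err != nil {
		return nil, err
	}
	return &compressed{name: path, crc: crc.Sum32(), size: n, csize: csize, tmpPath: tmp.Name()}, nil
}

func main() {
	// In the scheme above, a pool of goroutines would call compressToTemp,
	// and the results would then be stitched together sequentially.
	_, _ = compressToTemp("example.txt")
}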
With Go 1.17, parallel compression and merging of zip files are possible using the archive/zip package.
An example is below. In the example, I create zip workers to build individual zip files, and an entry-provider worker which provides entries to be added to a zip file via a channel to the zip workers. Actual files could be provided to the zip workers, but I skipped that part.
package main
import (
"archive/zip"
"context"
"fmt"
"io"
"log"
"os"
"strings"
"golang.org/x/sync/errgroup"
)
const numOfZipWorkers = 10
type entry struct {
name string
rc io.ReadCloser
}
func main() {
log.SetFlags(log.LstdFlags | log.Lshortfile)
entCh := make(chan entry, numOfZipWorkers)
zpathCh := make(chan string, numOfZipWorkers)
group, ctx := errgroup.WithContext(context.Background())
for i := 0; i < numOfZipWorkers; i++ {
group.Go(func() error {
return zipWorker(ctx, entCh, zpathCh)
})
}
group.Go(func() error {
defer close(entCh) // Signal workers to stop.
return entryProvider(ctx, entCh)
})
err := group.Wait()
if err != nil {
log.Fatal(err)
}
f, err := os.OpenFile("output.zip", os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0644)
if err != nil {
log.Fatal(err)
}
zw := zip.NewWriter(f)
close(zpathCh)
for path := range zpathCh {
zrd, err := zip.OpenReader(path)
if err != nil {
log.Fatal(err)
}
for _, zf := range zrd.File {
err := zw.Copy(zf)
if err != nil {
log.Fatal(err)
}
}
_ = zrd.Close()
_ = os.Remove(path)
}
err = zw.Close()
if err != nil {
log.Fatal(err)
}
err = f.Close()
if err != nil {
log.Fatal(err)
}
}
func entryProvider(ctx context.Context, entCh chan<- entry) error {
for i := 0; i < 2*numOfZipWorkers; i++ {
select {
case <-ctx.Done():
return ctx.Err()
case entCh <- entry{
name: fmt.Sprintf("file_%d", i+1),
rc: io.NopCloser(strings.NewReader(fmt.Sprintf("content %d", i+1))),
}:
}
}
return nil
}
func zipWorker(ctx context.Context, entCh <-chan entry, zpathch chan<- string) error {
f, err := os.CreateTemp(".", "tmp-part-*")
if err != nil {
return err
}
zw := zip.NewWriter(f)
Loop:
for {
var (
ent entry
ok bool
)
select {
case <-ctx.Done():
err = ctx.Err()
break Loop
case ent, ok = <-entCh:
if !ok {
break Loop
}
}
hdr := &zip.FileHeader{
Name: ent.name,
Method: zip.Deflate, // zip.Store can also be used.
}
hdr.SetMode(0644)
w, e := zw.CreateHeader(hdr)
if e != nil {
_ = ent.rc.Close()
err = e
break
}
_, e = io.Copy(w, ent.rc)
_ = ent.rc.Close()
if e != nil {
err = e
break
}
}
if e := zw.Close(); e != nil && err == nil {
err = e
}
if e := f.Close(); e != nil && err == nil {
err = e
}
if err == nil {
select {
case <-ctx.Done():
err = ctx.Err()
case zpathch <- f.Name():
}
}
return err
}
I am trying to take input from the keyboard and then store it in a text file, but I am a bit confused about how to actually do it.
My current code is as follows:
// reads the file text.txt
bs, err := ioutil.ReadFile("text.txt")
if err != nil {
panic(err)
}
// Prints out content
textInFile := string(bs)
fmt.Println(textInFile)
// Standard input from keyboard
var userInput string
fmt.Scanln(&userInput)
//Now I want to write input back to file text.txt
//func WriteFile(filename string, data []byte, perm os.FileMode) error
inputData := make([]byte, len(userInput))
err := ioutil.WriteFile("text.txt", inputData, )
There are so many functions in the os and io packages that I am very confused about which one I should actually use for this purpose.
I am also confused about what the third argument in the WriteFile function should be. The documentation says it is of type os.FileMode, but since I am new to programming and Go I am a bit clueless.
Does anybody have any tips on how to proceed?
Thanks in advance,
Marie
// reads the file text.txt
bs, err := ioutil.ReadFile("text.txt")
if err != nil { //may want logic to create the file if it doesn't exist
panic(err)
}
fmt.Println(string(bs)) //print the current contents (and make sure bs is used)
var userInput []string
var n int
//read in multiple lines from user input
//until an error (e.g. EOF or a blank line) stops the loop
for ln := ""; err == nil; n, err = fmt.Scanln(&ln) { //note the &ln: Scanln needs a pointer
if n > 0 { //we actually read something into the string
userInput = append(userInput, ln)
} //if we didn't read anything, err is probably set
}
//open the file to append to it
//0666 corresponds to unix perms rw-rw-rw-,
//which means anyone can read or write it
out, err := os.OpenFile("text.txt", os.O_APPEND|os.O_WRONLY, 0666)
if err != nil { //bail out if the file couldn't be opened for writing
panic(err)
}
defer out.Close() //we'll close this file as we leave scope, no matter what
//write each of the user input lines followed by a newline
for _, outLn := range userInput {
io.WriteString(out, outLn+"\n")
}
I've made sure this compiles and runs on play.golang.org, but I'm not at my dev machine, so I can't verify that it's interacting with Stdin and the file entirely correctly. This should get you started though.
For example,
package main
import (
"fmt"
"io/ioutil"
"os"
)
func main() {
fname := "text.txt"
// print text file
textin, err := ioutil.ReadFile(fname)
if err == nil {
fmt.Println(string(textin))
}
// append text to file
f, err := os.OpenFile(fname, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0666)
if err != nil {
panic(err)
}
var textout string
fmt.Scanln(&textout)
_, err = f.Write([]byte(textout))
if err != nil {
panic(err)
}
f.Close()
// print text file
textin, err = ioutil.ReadFile(fname)
if err != nil {
panic(err)
}
fmt.Println(string(textin))
}
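One caveat with this example (my note, not the answer's): fmt.Scanln stops at the first space, so multi-word input is truncated. If that matters, a bufio.Scanner reads the whole line. A sketch that would drop in place of the Scanln call above (it assumes "bufio" is added to the imports):

sc := bufio.NewScanner(os.Stdin)
if sc.Scan() {
	textout := sc.Text() // the whole line, spaces included
	if _, err = f.Write([]byte(textout)); err != nil {
		panic(err)
	}
}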
If you simply want to append the user's input to a text file, you could just read the
input as you've already done and use ioutil.WriteFile, as you've tried to do.
So you already have the right idea.
To make your approach work, the simplified solution would be this:
// Read old text
current, err := ioutil.ReadFile("text.txt")
// Standard input from keyboard
var userInput string
fmt.Scanln(&userInput)
// Append the new input to the old using builtin `append`
newContent := append(current, []byte(userInput)...)
// Now write the input back to file text.txt
err = ioutil.WriteFile("text.txt", newContent, 0666)
The last parameter of WriteFile is of type os.FileMode, which specifies the mode of the
file. The higher bits are options like the file type (os.ModeDir, for example) and the lower
bits represent the permissions in the form of UNIX permissions (0666, in octal format, stands for user rw, group rw, others rw). See the documentation for more details.
Now that your code works, we can improve it, for example by keeping the file open
instead of opening it twice:
// Open the file for reading and writing (O_RDWR), append to it if it has
// content, create it if it does not exist, and use 0666 for permissions
// on creation.
file, err := os.OpenFile("text.txt", os.O_RDWR|os.O_APPEND|os.O_CREATE, 0666)
// Close the file when the surrounding function exits
defer file.Close()
// Read old content
current, err := ioutil.ReadAll(file)
// Do something with that old content, for example, print it
fmt.Println(string(current))
// Standard input from keyboard
var userInput string
fmt.Scanln(&userInput)
// Now write the input back to file text.txt
_, err = file.WriteString(userInput)
The magic here is that you use the flag os.O_APPEND while opening the file,
which makes file.WriteString() append. Note that you need to close the file after
opening it, which we do when the function exits, using the defer keyword.