Modify ServeContent file on the fly - go

I'm writing a simple web server to serve static files. Any HTML file being served needs to be modified on the fly to include some HTML just before its closing </body> tag.
I achieved it with the code below and it works, but is there perhaps a more efficient way of doing it? I'm a beginner in Go, and this code needs to be very performant.
// error handling etc. omitted for brevity
dir := http.Dir("my/path")
content, _ := dir.Open("my_file")
defer content.Close()

var bodyBuf strings.Builder
var contentBuf *bytes.Buffer
io.Copy(&bodyBuf, content)

if strings.HasSuffix("some/web/uri", ".html") {
	new_html_content := "<whatever></body>"
	bodyRpld := strings.Replace(bodyBuf.String(), "</body>", new_html_content, 1)
	contentBuf = bytes.NewBuffer([]byte(bodyRpld))
} else {
	contentBuf = bytes.NewBuffer([]byte(bodyBuf.String()))
}

d, _ := content.Stat()
http.ServeContent(w, r, "my/path", d.ModTime(), bytes.NewReader(contentBuf.Bytes()))
Thanks!

To avoid creating large buffers for files that do not match your *.html pattern, I would suggest using an io.Reader mechanism to pass through files that you want to serve untouched. This avoids loading potentially large assets (e.g. 100 MB non-HTML video files) into memory.
For files that do match the HTML check, your string replace is probably fine, as .html files are typically small.
So try something like this:
dir := http.Dir("my/path")
content, err := dir.Open("my_file") // check error
defer content.Close()

var rs io.ReadSeeker // http.ServeContent needs a ReadSeeker
if !strings.HasSuffix("some/web/uri", ".html") {
	rs = content // pass through the file content (avoids memory allocs)
} else {
	// similar to what you had before
	b := new(bytes.Buffer)
	_, err = b.ReadFrom(content) // check err
	new_html_content := "<whatever></body>"
	newContent := strings.Replace(b.String(), "</body>", new_html_content, 1)
	rs = bytes.NewReader([]byte(newContent))
}

d, err := content.Stat() // check error
http.ServeContent(w, r, "my_file", d.ModTime(), rs)

Note that the reader is named rs so it doesn't shadow the *http.Request r, which ServeContent also needs as its second argument.

Related

Referencing a file several levels up and down in a hierarchical structure in go

I'm trying to use os.Open(fileDir) to read a file, then upload that file to an S3 bucket. Here's what I have so far.
func addFileToS3(s *session.Session, fileDir string) error {
	file, err := os.Open(fileDir)
	if err != nil {
		return err
	}
	defer file.Close()

	// Get file size and read the file content into a buffer
	fileInfo, _ := file.Stat()
	var size int64 = fileInfo.Size()
	buffer := make([]byte, size)
	file.Read(buffer)

	// code to upload to s3
	return nil
}
My directory structure is like
|--project-root
|--test
|--functional
|--assets
|-- good
|--fileINeed
But my code is running inside
|--project-root
|--services
|--service
|--test
|--myGoCode
How do I pass in the correct fileDir? I need a solution that works locally and when the code gets deployed. I looked at the path/filepath package, but I wasn't sure whether to get the absolute path first and then walk down the hierarchy, or to do something else.
You can add the following small function to get the expected file path.
var (
	_, file, _, _ = runtime.Caller(0)
	baseDir       = filepath.Dir(file)
	projectDir    = filepath.Join(baseDir, "../../..")
)

func getFileINeedDirectory() string {
	// project-root/test/functional/assets/good/fileINeed
	return filepath.Join(projectDir, "test/functional/assets/good/fileINeed")
}

(Use filepath.Join rather than path.Join throughout, so separators are correct on every OS; runtime.Caller(0) gives the path of the source file this code lives in, so the result is independent of the working directory.)

Multipart upload to s3 while writing to reader

I've found a few questions that are similar to mine, but nothing that answers my specific question.
I want to upload CSV data to s3. My basic code is along the lines of (I've simplified getting the data for brevity, normally it's reading from a database):
reader, writer := io.Pipe()

go func() {
	cWriter := csv.NewWriter(writer) // write into the pipe's writer end
	for _, line := range lines {
		cWriter.Write(line)
	}
	cWriter.Flush()
	writer.Close()
}()
sess := session.New(//...)
uploader := s3manager.NewUploader(sess)
result, err := uploader.Upload(&s3manager.UploadInput{
	Body: reader,
	//...
})
The way I understand it, the code will wait for writing to finish and then will upload the contents to s3, so I end up with the full contents of the file in memory. Is it possible to chunk the upload (possibly using the s3 multipart upload?) so that for larger uploads, I'm only storing part of the data in memory at any one time?
The uploader does support multipart upload, if I've read its source code correctly: https://github.com/aws/aws-sdk-go/blob/master/service/s3/s3manager/upload.go
The minimum size of an uploaded part is 5 MB.
// MaxUploadParts is the maximum allowed number of parts in a multi-part upload
// on Amazon S3.
const MaxUploadParts = 10000
// MinUploadPartSize is the minimum allowed part size when uploading a part to
// Amazon S3.
const MinUploadPartSize int64 = 1024 * 1024 * 5
// DefaultUploadPartSize is the default part size to buffer chunks of a
// payload into.
const DefaultUploadPartSize = MinUploadPartSize
u := &Uploader{
	PartSize:       DefaultUploadPartSize,
	MaxUploadParts: MaxUploadParts,
	// ...
}

func (u Uploader) UploadWithContext(ctx aws.Context, input *UploadInput, opts ...func(*Uploader)) (*UploadOutput, error) {
	i := uploader{in: input, cfg: u, ctx: ctx}
	// ...
}

func (u *uploader) nextReader() (io.ReadSeeker, int, error) {
	// ...
	switch r := u.in.Body.(type) {
	// ...
	default:
		part := make([]byte, u.cfg.PartSize)
		n, err := readFillBuf(r, part)
		u.readerPos += int64(n)
		return bytes.NewReader(part[0:n]), n, err
	}
}
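To see why this keeps memory bounded, here is a stdlib-only sketch that mimics what the uploader does with a pipe-fed Body: a goroutine writes CSV into an io.Pipe while the consumer reads fixed-size parts, so at most one part is held in memory at a time. The function name, part size, and row count are made up for illustration:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"io"
	"strconv"
)

// streamCSV writes `rows` CSV lines into a pipe and consumes them in
// fixed-size parts, returning the total number of bytes read. The consumer
// loop mirrors the uploader's nextReader: fill one part, then "upload" it.
func streamCSV(rows, partSize int) int {
	pr, pw := io.Pipe()
	go func() {
		cw := csv.NewWriter(pw)
		for i := 0; i < rows; i++ {
			cw.Write([]string{strconv.Itoa(i), "data"})
		}
		cw.Flush()
		pw.Close() // signals EOF to the reader side
	}()

	part := make([]byte, partSize)
	total := 0
	for {
		n, err := io.ReadFull(pr, part)
		total += n
		// here each filled part would become one multipart-upload chunk
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			break
		}
	}
	return total
}

func main() {
	fmt.Println("streamed bytes:", streamCSV(1000, 4096))
}
```

With the real uploader, each filled part corresponds to one UploadPart call, which is why passing the pipe's reader as Body avoids holding the whole file in memory.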

Go - How to render the template to a temporary byte buffer using Pongo2?

I am trying to send HTML emails using Go, but instead of using the native html/template package I am trying to do it with Pongo2.
In this question: Is it possible to create email templates with CSS in Google App Engine Go?
The user provides this example, which uses html/template:
var tmpl = template.Must(template.ParseFiles("templates/email.html"))

buff := new(bytes.Buffer)
if err = tmpl.Execute(buff, struct{ Name string }{"Juliet"}); err != nil {
	panic(err.Error())
}

msg := &mail.Message{
	Sender:   "romeo@montague.com",
	To:       []string{"Juliet <juliet@capulet.org>"},
	Subject:  "See you tonight",
	Body:     "...you put here the non-HTML part...",
	HTMLBody: buff.String(),
}

c := appengine.NewContext(r)
if err := mail.Send(c, msg); err != nil {
	c.Errorf("Alas, my user, the email failed to sendeth: %v", err)
}
What I am trying to do:

var tmpl = pongo2.Must(pongo2.FromFile("template.html"))

buff := new(bytes.Buffer)
tmpl.Execute(buff, pongo2.Context{"data": "best-data"}, w)

The problem here is that pongo2's Execute() only accepts the context data, not the buff.
My end goal is to write my templates using Pongo2 and render the HTML in a way that I can also use for sending my emails.
My question is: what am I doing wrong? Is what I am trying to achieve possible? If I can find a way to render that HTML into buff, I can later use buff.String(), which will let me set it as the HTML body.
Use ExecuteWriterUnbuffered instead of Execute:

tmpl.ExecuteWriterUnbuffered(pongo2.Context{"data": "best-data"}, buff)

(buff is already a *bytes.Buffer, so pass it directly rather than &buff.) Not sure what w is doing in your example. If it's another writer that you'd like to write to as well, you can use an io.MultiWriter:

// writes to w2 will go to both buff and w
w2 := io.MultiWriter(buff, w)

How to append files (io.Reader)?

func SimpleUploader(r *http.Request, w http.ResponseWriter) {
	// temp folder path
	chunkDirPath := "./creatives/.uploads/" + userUUID
	// create folder
	err = os.MkdirAll(chunkDirPath, 02750)

	// Get file handle from multipart request
	var file io.Reader
	mr, err := r.MultipartReader()
	var fileName string

	// Read multipart body until the "file" part
	for {
		part, err := mr.NextPart()
		if err == io.EOF {
			break
		}
		if part.FormName() == "file" {
			file = part
			fileName = part.FileName()
			fmt.Println(fileName)
			break
		}
	}

	// Create files
	tempFile := chunkDirPath + "/" + fileName
	dst, err := os.Create(tempFile)
	defer dst.Close()

	buf := make([]byte, 1024*1024)
	file.Read(buf)
	// write/save buffer to disk
	ioutil.WriteFile(tempFile, buf, os.ModeAppend)

	if http.DetectContentType(buf) != "video/mp4" {
		response, _ := json.Marshal(&Response{"File upload cancelled"})
		settings.WriteResponse(w, http.StatusInternalServerError, response)
		return
	}

	// joinedFile := io.MultiReader(bytes.NewReader(buf), file)
	_, err = io.Copy(dst, file)
	if err != nil {
		settings.LogError(err, methodName, "Error copying file")
	}

	response, _ := json.Marshal(&Response{"File uploaded successfully"})
	settings.WriteResponse(w, http.StatusInternalServerError, response)
}
I am uploading a video file.
Before uploading the entire file, I want to do some checks, so I save the first 1 MB to a file:

buf := make([]byte, 1024*1024)
file.Read(buf)
// write/save buffer to disk
ioutil.WriteFile(tempFile, buf, os.ModeAppend)

Then, if the checks pass, I want to upload the rest of the file. dst is the same file used to save the first 1 MB, so basically I am trying to append to the file:

_, err = io.Copy(dst, file)

The uploaded file size is correct, but the file is corrupted (can't play the video).
What else have I tried? Joining both readers and saving to a new file. But with this approach the file size increases by 1 MB and the file is still corrupted:

joinedFile := io.MultiReader(bytes.NewReader(buf), file)
_, err = io.Copy(newDst, joinedFile)

Kindly help.
You've basically opened the file twice, once with os.Create and once with ioutil.WriteFile.
The issue is that os.Create's return value (dst) keeps its own offset pointing at the beginning of the file, and WriteFile doesn't move where dst points to.
So you are doing WriteFile, then io.Copy on top of the first set of bytes WriteFile wrote.
Try doing WriteFile first (with the create flag), and then os.OpenFile (instead of os.Create) on that same file with the append flag, to append the remaining bytes to the end.
Also, it's extremely risky to let a client give you the filename, as it could be ../../.bashrc (for example), and you'd overwrite your shell init with whatever the user decided to upload.
It would be much safer to compute a filename yourself; if you need to remember the user's chosen filename, store it in your database, or even in a metadata.json-type file that you load later.

How add a file to an existing zip file using Golang

We can create a new zip file and add files to it using Go.
But how do we add a new file to an existing zip file using Go?
If we can use the Create function, how do we get the zip.Writer reference?
A bit confused.
After more analysis, I found that it is not possible to add files to an existing zip file in place.
But I was able to add files to a tar file by following the hack given in this URL.
You can:

copy the old zip items into a new zip file;
add the new files into the new zip file.

// error handling omitted for brevity
zipReader, err := zip.OpenReader(zipPath)
targetFile, err := os.Create(targetFilePath)
targetZipWriter := zip.NewWriter(targetFile)

for _, zipItem := range zipReader.File {
	zipItemReader, err := zipItem.Open()
	header, err := zip.FileInfoHeader(zipItem.FileInfo())
	header.Name = zipItem.Name
	targetItem, err := targetZipWriter.CreateHeader(header)
	_, err = io.Copy(targetItem, zipItemReader)
	zipItemReader.Close()
}

addNewFiles(targetZipWriter) // IMPLEMENT YOUR LOGIC
targetZipWriter.Close() // Close writes the central directory
Although I have not yet tried opening an existing zip file and then writing to it, I believe you should be able to add files to it this way.
This is code I wrote to create a conglomerate zip file containing multiple files, in order to expedite uploading the data to another location. I hope it helps!
type fileData struct {
	Filename string
	Body     []byte
}

func main() {
	outputFilename := "path/to/file.zip"

	// whatever you want as filenames and bodies
	fileDatas := createFileDatas()

	// create zip file
	conglomerateZip, err := os.Create(outputFilename)
	if err != nil {
		log.Fatal(err) // main can't return an error
	}
	defer conglomerateZip.Close()

	zipWriter := zip.NewWriter(conglomerateZip)
	defer zipWriter.Close()

	// populate zip file with multiple files
	err = populateZipfile(zipWriter, fileDatas)
	if err != nil {
		log.Fatal(err)
	}
}

func populateZipfile(w *zip.Writer, fileDatas []*fileData) error {
	for _, fd := range fileDatas {
		f, err := w.Create(fd.Filename)
		if err != nil {
			return err
		}
		_, err = f.Write(fd.Body) // Body is already []byte
		if err != nil {
			return err
		}
		err = w.Flush()
		if err != nil {
			return err
		}
	}
	return nil
}
This is a bit old and already has an answer, but if performance isn't a key concern for you (say, making the zip file isn't on a hot path), you can do this with the archive/zip library by creating a new writer, copying the existing files into it, and then adding your new content. Something like this:

zw := // new zip writer from buffer or temp file
newFileName := // file name to add

reader, _ := zip.NewReader(bytes.NewReader(existingFile), int64(len(existingFile)))
for _, file := range reader.File {
	if file.Name == newFileName {
		continue // don't copy the old file over, to avoid duplicates
	}
	fw, _ := zw.Create(file.Name)
	fr, _ := file.Open()
	io.Copy(fw, fr)
	fr.Close()
}

Then you would return the new writer and append files as needed. If you aren't sure which files might overlap, you can turn that if check into a function that takes a list of the file names you will eventually add. You can also use this logic to remove a file from an existing archive.
Now in 2021, there is still no support for appending files to an existing archive.
But at least it is now possible to add already-compressed files, i.e. we no longer have to decompress and re-compress files when duplicating them from the old archive to the new one.
(NOTE: this only applies to Go 1.17+)
So, based on the examples by @wongoo and @Michael, here is how I would implement appending files now with minimum performance overhead (you'll want to add error handling, though):

zr, err := zip.OpenReader(zipPath)
defer zr.Close()

zwf, err := os.Create(targetFilePath)
defer zwf.Close()

zw := zip.NewWriter(zwf)
defer zw.Close() // note: Close writes the central directory

for _, zipItem := range zr.File {
	if isOneOfNamesWeWillAdd(zipItem.Name) {
		continue // avoid duplicate files!
	}
	zipItemReader, err := zipItem.OpenRaw()
	header := zipItem.FileHeader             // clone header data
	targetItem, err := zw.CreateRaw(&header) // use cloned data
	_, err = io.Copy(targetItem, zipItemReader)
}

addNewFiles(zw) // IMPLEMENT YOUR LOGIC