Close the file before renaming it in Go

When I do some file operations in Go, I first open a file and add Close() to the defer list, then I try to rename that file. If I close the file manually, the deferred call will close it again. If I wait for the defer to close it, the rename will fail because the file is not closed yet. Code as below:
func main() {
    pfile1, _ := os.Open("myfile.log")
    defer pfile1.Close() // It will be closed again.
    ...
    ...
    pfile1.Close() // I have to close it before renaming it.
    os.Rename("myfile.log", "myfile1.log")
}
I found an ugly solution, such as creating another function to isolate the file opening. Is there a better solution than the one below?
func main() {
    var pfile1 *os.File
    ugly_solution(pfile1)
    os.Rename("myfile.log", "myfile1.log")
}

func ugly_solution(file *os.File) {
    file, _ = os.Open("myfile.log")
    defer file.Close()
}

There are a few things that are not clear to me about your code.
First of all, why do you open the file before renaming it? os.Rename does not require it: the function takes two strings, the old and the new file name, so there is no need for a file pointer.
func main() {
    ...
    ...
    os.Rename("myfile.log", "myfile1.log")
}
Assuming you need to make changes to the file content (which doesn't seem to be the case, given the ugly_solution method) and you do have to open the file, then why defer file.Close() at all? You don't have to defer a call that needs to happen explicitly at a specific point in the same function. Simply call it.
func main() {
    pfile1, _ := os.Open("myfile.log")
    ...
    ...
    pfile1.Close()
    os.Rename("myfile.log", "myfile1.log")
}

You can put both closing and renaming the file in the defer:
func main() {
    pfile1, _ := os.Open("myfile.log")
    defer func() {
        pfile1.Close()
        os.Rename("myfile.log", "myfile1.log")
    }()
    ...
    ...
}

In a situation like your sample, you may want to follow this scenario:
Create an easily identifiable temporary file.
Write the data.
Close the file.
If successful, rename the file.
In that case, where you care about what the OS actually does with the underlying file, you may simply not want to defer the Close on the os.File, because you want to check the error returned by Close itself.
You may also want to call file.Sync() before closing (a sketch follows below).
See https://www.joeshaw.org/dont-defer-close-on-writable-files/
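A minimal, hedged sketch of that scenario, with placeholder file names and simplified error handling:

package main

import (
    "log"
    "os"
)

// writeAndRename follows the scenario above: write to an identifiable
// temporary file, sync, close, and only then rename it into place.
func writeAndRename(data []byte) error {
    tmp, err := os.CreateTemp(".", "myfile-*.tmp")
    if err != nil {
        return err
    }

    // No deferred Close here: we want to check the errors from Sync and Close.
    if _, err := tmp.Write(data); err != nil {
        tmp.Close()
        return err
    }
    if err := tmp.Sync(); err != nil {
        tmp.Close()
        return err
    }
    if err := tmp.Close(); err != nil {
        return err
    }

    // Rename only after a successful write and close.
    return os.Rename(tmp.Name(), "myfile1.log")
}

func main() {
    if err := writeAndRename([]byte("hello\n")); err != nil {
        log.Fatal(err)
    }
}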

Related

pause N goroutines inside handlerFunc

Currently I'm implementing a caching system using the standard library net/http.
An endpoint parses a key and validates the request using the isOK(key) function. If it is not OK, one goroutine is sent to makeSureNowOK(key, endpoint) to make sure isOK(key) will return true on the next request.
My simplified solution looks as follows:
func (ep *Endpoint) Handler() func(...) {
    for {
        ep.mu.Lock()
        // WAITINGROOM //
        //lint:ignore SA2001 empty critical section
        ep.mu.Unlock()

        bytesBody, err := isOK(key)
        if err != nil {
            select {
            case <-ep.pause:
                go makeSureNowOK(key)
            default:
            }
        } else {
            ...
            return
        }
    }
}

func makeSureNowOK(key string, ep ...) {
    ep.mu.Lock()
    ... do validation ...
    ep.pause <- struct{}{}
    ep.mu.Unlock()
}
So I'm using a mutex to block further executions, and a channel with a select to catch goroutines that got past the isOK check.
Another idea, to avoid the mutex, is to use a closed channel to allow goroutines to pass. But then I have to recreate it to block them again, which feels somewhat hacky.
How would you approach this problem?
Edit: To make my question clearer: the code above works as intended, but I feel like creating a "waiting room" by calling .Unlock() immediately after .Lock() is not a clean way to achieve this. Do you have other suggestions?
An alternative would be a sync.WaitGroup, but then I'd have to call waitgroup.Wait (where I am currently un/locking the mutex), which would happen before waitgroup.Add, which is just as bad.
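For reference, a rough sketch of the closed-channel idea mentioned above (all names are invented, and it still needs a mutex to swap the channel safely, which is part of why it feels hacky):

package main

import "sync"

// Gate: a closed channel lets goroutines pass immediately; swapping in a
// fresh channel blocks them again.
type Gate struct {
    mu   sync.Mutex
    pass chan struct{} // closed => goroutines may pass
}

func NewGate() *Gate {
    g := &Gate{pass: make(chan struct{})}
    close(g.pass) // start in the "open" state
    return g
}

// Wait blocks until the gate is open.
func (g *Gate) Wait() {
    g.mu.Lock()
    ch := g.pass
    g.mu.Unlock()
    <-ch // receiving from a closed channel returns immediately
}

// Block replaces the channel so that subsequent Wait calls stop here.
func (g *Gate) Block() {
    g.mu.Lock()
    g.pass = make(chan struct{})
    g.mu.Unlock()
}

// Open closes the current channel, releasing every waiter.
func (g *Gate) Open() {
    g.mu.Lock()
    select {
    case <-g.pass:
        // already open
    default:
        close(g.pass)
    }
    g.mu.Unlock()
}

func main() {
    g := NewGate()
    g.Block()
    go g.Open() // some other goroutine reopens the gate
    g.Wait()    // blocks until Open runs
}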

Defer to outside a function

A common pattern I use is:
resource.open()
defer resource.close()
sometimes checking errors in between, which leads to:
err := resource.open()
if err != nil {
    // do error stuff and return
}
defer resource.close()
Sometimes I need to open and close multiple resources in a row, so a variation of the previous five lines gets repeated one after another. That block may also appear verbatim in several places in my code (wherever I need the same set of resources).
It would be wonderful to wrap all this in a function. However doing so would close the resource as soon as the function call is over. Is there any way around this - either deferring to a "level up" the call stack or some other way?
One way to do this is using an "initializer" function with callback:
func WithResources(f func(Resource1, Resource2)) {
    r1 := NewResource1()
    defer r1.Close()
    r2 := NewResource2()
    defer r2.Close()
    f(r1, r2)
}

func F() {
    WithResources(func(r1 Resource1, r2 Resource2) {
        // Use r1, r2
    })
}
The signature of the function f depends on your exact use case.
Another way is to use a struct for a resource set:
type Resources struct {
    R1 Resource1
    R2 Resource2
    ...
}

func NewResources() *Resources {
    r := &Resources{}
    r.R1 = NewR1()
    r.R2 = NewR2()
    return r
}

func (r *Resources) Close() {
    r.R1.Close()
    r.R2.Close()
}

func f() {
    r := NewResources()
    defer r.Close()
    ...
}
It would be wonderful to wrap all this in a function.
Most probably a lot of people would hate reading such code. So "wonderful" might be very subjective.
However doing so would close the resource as soon as the function call is over.
Exactly.
Is there any way around this [...]?
No.

Does the file need to be closed?

Using Go with gin, I read the file data using:
file, fileHeader, err := ctx.Request.FormFile("blabla...")
Do I need to write this?
defer file.Close()
I jumped to the source code, which says:
// Open opens and returns the FileHeader's associated File.
func (fh *FileHeader) Open() (File, error) {
    if b := fh.content; b != nil {
        r := io.NewSectionReader(bytes.NewReader(b), 0, int64(len(b)))
        fmt.Printf("TODDLINE:152\n")
        fmt.Printf("TODDLINE:154:fmpfile:%#v\n", fh.tmpfile)
        fmt.Printf("TODDLINE:154:Filename:%#v\n", fh.Filename)
        return sectionReadCloser{r}, nil
    }
    fmt.Printf("TODDLINE:155\n")
    return os.Open(fh.tmpfile)
}
If it uses os.Open, I guess I must close the file; but if it returns the sectionReadCloser{r}, its Close function looks like this:
func (rc sectionReadCloser) Close() error {
    return nil
}
The Close function of sectionReadCloser doesn't do anything.
And I found that it does return the sectionReadCloser{r}.
I guess I should close the file anyway, but I still want to know when it returns the os.Open result. I will keep reading the source code and try to understand it. It would be nice if someone could give me some advice.
If the file returned implements io.Closer (i.e., if it has a Close method), assume you are responsible for closing it unless the documentation explicitly states otherwise.
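In other words, just defer the close unconditionally; it costs nothing when Close is a no-op. A minimal sketch of a gin handler doing so (the route, responses, and "/upload" path are assumptions; the form field name is the placeholder from the question):

package main

import (
    "net/http"

    "github.com/gin-gonic/gin"
)

func uploadHandler(ctx *gin.Context) {
    // "blabla..." is the placeholder field name from the question.
    file, fileHeader, err := ctx.Request.FormFile("blabla...")
    if err != nil {
        ctx.String(http.StatusBadRequest, "bad form: %v", err)
        return
    }
    // Close unconditionally: it matters when the upload was spilled to a temp
    // file (the os.Open path) and is a harmless no-op for sectionReadCloser.
    defer file.Close()

    ctx.String(http.StatusOK, "got %s (%d bytes)", fileHeader.Filename, fileHeader.Size)
}

func main() {
    r := gin.Default()
    r.POST("/upload", uploadHandler)
    r.Run(":8080")
}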

Write fixed length padded lines to file Go

Printing justified, fixed-length values seems to be what everyone asks about, and there are many examples I have found, like...
package main

import "fmt"

func main() {
    values := []string{"Mustang", "10", "car"}
    for i := range values {
        fmt.Printf("%10v...\n", values[i])
    }
    for i := range values {
        fmt.Printf("|%-10v|\n", values[i])
    }
}
Situation
But what if I need to write fixed-length lines to a file?
For example: what if I have a requirement that says each line written to the file must be 32 bytes, left justified and padded to the right with 0's?
Question
So, how do you accomplish this when writing to a file?
There are analogues of the fmt.PrintXX() functions that start with an F and take the form fmt.FprintXX(). These variants write the result to an io.Writer, which may be an os.File as well.
So if you have the fmt.Printf() statements which you want to direct to a file, just change them to call fmt.Fprintf() instead, passing the file as the first argument:
var f *os.File = ... // Initialize / open file
fmt.Fprintf(f, "%10v...\n", values[i])
If you look into the implementation of fmt.Printf():
func Printf(format string, a ...interface{}) (n int, err error) {
    return Fprintf(os.Stdout, format, a...)
}
It does exactly this: it calls fmt.Fprintf(), passing os.Stdout as the output to write to.
For how to open a file, see How to read/write from/to file using Go?
See related question: Format a Go string without printing?
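As for the exact 32-byte requirement from the question: the %-32s verb left-justifies but pads with spaces, not zeros, so the zero padding has to be done by hand. A rough sketch under that assumption (the output file name is a placeholder, and truncating over-long values is my choice, not part of the question):

package main

import (
    "fmt"
    "log"
    "os"
    "strings"
)

// pad32 left-justifies s and pads it to 32 bytes with '0' on the right.
func pad32(s string) string {
    if len(s) >= 32 {
        return s[:32]
    }
    return s + strings.Repeat("0", 32-len(s))
}

func main() {
    f, err := os.Create("fixed.txt") // placeholder file name
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    for _, v := range []string{"Mustang", "10", "car"} {
        fmt.Fprintln(f, pad32(v))
    }
}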

Does closing io.PipeWriter close the underlying file?

I am using logrus for logging and have a few custom format loggers. Each is initialized to write to a different file like:
fp, _ := os.OpenFile(path, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0755)
// error handling left out for brevity
log.Out = fp
Later in the application I need to change the file the logger is writing to (for log rotation logic). What I want is to properly close the current file before switching the logger's output to the new one. But the closest thing to the file handle that logrus exposes is a Writer() method that returns an *io.PipeWriter. So would calling Close() on that PipeWriter also close the underlying file?
If not, what are my options, other than storing the file pointer somewhere myself?
For the record, twelve-factor tells us that applications should not concern themselves with log rotation. If and how logs are handled best depends on how the application is deployed. Systemd has its own logging system, for instance. Writing to files when deployed in (Docker) containers is annoying. Rotating files are annoying during development.
Now, pipes don't have an "underlying file". There's a Reader end and a Writer end, and that's it. From the docs for PipeWriter:
Close closes the writer; subsequent reads from the read half of the pipe will return no bytes and EOF.
So what happens when you close the writer depends on how Logrus handles EOF on the Reader end. Since Logger.Out is an io.Writer, Logrus cannot possibly call Close on your file.
Your best bet would be to wrap *os.File, perhaps like so:
package main

import "os"

type RotatingFile struct {
    *os.File
    rotate chan struct{}
}

func NewRotatingFile(f *os.File) RotatingFile {
    return RotatingFile{
        File:   f,
        rotate: make(chan struct{}, 1),
    }
}

func (r RotatingFile) Rotate() {
    r.rotate <- struct{}{}
}

func (r RotatingFile) doRotate() error {
    // file rotation logic here
    return nil
}

func (r RotatingFile) Write(b []byte) (int, error) {
    select {
    case <-r.rotate:
        if err := r.doRotate(); err != nil {
            return 0, err
        }
    default:
    }
    return r.File.Write(b)
}
Implementing log file rotation in a robust way is surprisingly tricky. For instance, closing the old file before creating the new one is not a good idea. What if the log directory permissions changed? What if you run out of inodes? If you can't create a new log file you may want to keep writing to the current file. Are you okay with ripping lines apart, or do you only want to rotate after a newline? Do you want to rotate empty files? How do you reliably remove old logs if someone deletes the N-1th file? Will you notice the Nth file or stop looking at the N-2nd?
The best advice I can give you is to leave log rotation to the pros. I like svlogd (part of runit) as a standalone log rotation tool.
Closing the io.PipeWriter will not affect the actual Writer behind it. The chain of close calls is:
PipeWriter.Close() -> PipeWriter.CloseWithError(err error) -> pipe.CloseWrite(err error)
and it does not touch the underlying io.Writer.
To close the actual writer, you just need to close Logger.Out, which is an exported field.
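For example, a hedged sketch of that rotation step (the function name and file paths are placeholders, and coordinating with in-flight log writes is left out):

package main

import (
    "io"
    "os"

    "github.com/sirupsen/logrus"
)

// rotate swaps the logger's output file and closes the old one.
// SetOutput takes the logger's internal mutex, so the swap itself is safe.
func rotate(logger *logrus.Logger, newPath string) error {
    newFp, err := os.OpenFile(newPath, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0755)
    if err != nil {
        return err
    }
    old := logger.Out
    logger.SetOutput(newFp)
    if c, ok := old.(io.Closer); ok {
        return c.Close() // closes the previous *os.File
    }
    return nil
}

func main() {
    logger := logrus.New()
    fp, _ := os.OpenFile("old.log", os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0755)
    logger.SetOutput(fp)

    // Later, rotate to a new file.
    if err := rotate(logger, "new.log"); err != nil {
        logger.Fatal(err)
    }
    logger.Info("now writing to new.log")
}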
