If I use p4 move to move a large file (or a large number of files), so that they are recorded as move/delete and move/add in Perforce, do other clients have to re-download those files when they sync that revision? Or will Perforce recognise that it can just move the files locally?
If that is not the default behaviour, is there any way to achieve it? Or is the best bet to move the files manually and do a reconcile?
The files will be re-downloaded, at least as of the server version I'm on. The only special-case optimization I'm aware of to sidestep downloading content for a newly synced revision is the case where the client file has been remapped to a different (identical) depot file. Syncing a moved file is handled as if the old file were deleted and the new file were added:
C:\Perforce\test\move>p4 -vrpc=3 sync ... | grep "RpcRecvBuffer func"
RpcRecvBuffer func = protocol
RpcRecvBuffer func = client-Crypto
RpcRecvBuffer func = flush1
RpcRecvBuffer func = client-Message
RpcRecvBuffer func = client-DeleteFile
RpcRecvBuffer func2 = B719AE71314697DE0C0C3AC29A76903D
RpcRecvBuffer func = client-Ack
RpcRecvBuffer func = client-Message
RpcRecvBuffer func = client-OpenFile
RpcRecvBuffer func = client-WriteFile
RpcRecvBuffer func = client-CloseFile
RpcRecvBuffer func2 = B719AE71314697DE0C0C3AC29A76903D
RpcRecvBuffer func = client-Ack
RpcRecvBuffer func = flush1
RpcRecvBuffer func = release
Doing a manual move followed by reconcile does indeed sound like the best workaround if you have a lot of moved files and you want to avoid the download.
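For the clients receiving the change, one concrete way to do the manual workaround (this is a sketch, not part of the answer above; all paths are placeholders) is to mirror the move on disk yourself and then use p4 sync -k, which updates the have list without transferring any file content:
move C:\Perforce\test\old C:\Perforce\test\new
p4 sync -k //depot/project/old/... //depot/project/new/...
After the sync -k the server considers the new revisions synced, but nothing was downloaded because the workspace files were left untouched.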
I am using logrus for logging and have a few custom format loggers. Each is initialized to write to a different file like:
fp, _ := os.OpenFile(path, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0755)
// error handling left out for brevity
log.Out = fp
Later in the application, I need to change the file the logger is writing to (for log rotation logic). What I want to achieve is to properly close the current file before changing the logger's output file. But the closest thing to the file handle that logrus gives me is the Writer() method, which returns an *io.PipeWriter. So would calling Close() on the PipeWriter also close the underlying file?
If not, what are my options for doing this, other than keeping the file pointer stored somewhere?
For the record, twelve-factor tells us that applications should not concern themselves with log rotation. If and how logs are handled best depends on how the application is deployed. Systemd has its own logging system, for instance. Writing to files when deployed in (Docker) containers is annoying. Rotating files are annoying during development.
Now, pipes don't have an "underlying file". There's a Reader end and a Writer end, and that's it. From the docs for PipeWriter:
Close closes the writer; subsequent reads from the read half of the pipe will return no bytes and EOF.
So what happens when you close the writer depends on how Logrus handles EOF on the Reader end. Since Logger.Out is an io.Writer, Logrus cannot possibly call Close on your file.
Your best bet would be to wrap *os.File, perhaps like so:
package main

import "os"

// RotatingFile wraps *os.File so the file can be swapped out between writes.
type RotatingFile struct {
    *os.File
    rotate chan struct{}
}

func NewRotatingFile(f *os.File) RotatingFile {
    return RotatingFile{
        File:   f,
        rotate: make(chan struct{}, 1),
    }
}

// Rotate requests a rotation; it takes effect on the next Write.
func (r RotatingFile) Rotate() {
    r.rotate <- struct{}{}
}

func (r RotatingFile) doRotate() error {
    // file rotation logic here
    return nil
}

// Write performs a pending rotation, if one was requested, before writing.
func (r RotatingFile) Write(b []byte) (int, error) {
    select {
    case <-r.rotate:
        if err := r.doRotate(); err != nil {
            return 0, err
        }
    default:
    }
    return r.File.Write(b)
}
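A hypothetical usage sketch (log is the *logrus.Logger from the question; the file name is a placeholder):
f, _ := os.OpenFile("app.log", os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0644)
// error handling left out for brevity
rf := NewRotatingFile(f)
log.Out = rf // logrus writes go through the wrapper
// later, e.g. from a timer or a SIGHUP handler:
rf.Rotate() // the next Write will run doRotate before writing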
Implementing log file rotation in a robust way is surprisingly tricky. For instance, closing the old file before creating the new one is not a good idea. What if the log directory permissions changed? What if you run out of inodes? If you can't create a new log file you may want to keep writing to the current file. Are you okay with ripping lines apart, or do you only want to rotate after a newline? Do you want to rotate empty files? How do you reliably remove old logs if someone deletes the N-1th file? Will you notice the Nth file or stop looking at the N-2nd?
The best advice I can give you is to leave log rotation to the pros. I like svlogd (part of runit) as a standalone log rotation tool.
Closing the io.PipeWriter will not affect the actual writer behind it. The chain of close calls is:
PipeWriter.Close() -> PipeWriter.CloseWithError(err error) ->
pipe.CloseWrite(err error)
and none of these touch the underlying io.Writer.
To close the actual writer, you just need to close Logger.Out, which is an exported field.
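For completeness, a minimal sketch of that (assuming Out was set to the *os.File from the question; log and newFile are placeholders):
if c, ok := log.Out.(io.Closer); ok {
    if err := c.Close(); err != nil {
        // handle/log the error from closing the old file
    }
}
log.Out = newFile // point the logger at the freshly opened file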
I am trying to implement the mkdir function in a FUSE filesystem, written in Go, and I'm using the Bazil library. I have successfully implemented a simple read-only fs, and I now want to be able to call mkdir inside any existing directory to make a new one.
I have made sure that all the existing directories are writable (attr.Mode = os.ModeDir | 0777).
Right now I have just added the function:
func (d Dir) MkDir(ctx context.Context, req *fuse.MkdirRequest) (fs.Node, error) {
    dir := &Dir{name: req.Name, files: 0, inode: 10 /* a random inode */, mode: os.FileMode(0777), nextdir: nil, nextfile: nil}
    d.nextdir = dir
    return dir, nil
}
in my own implementation of the hello fs example from Bazil's library. But that doesn't seem to make any difference.
When I call mkdir new_dir_name from the terminal, I get the error "mkdir: cannot create directory ‘new_dir_name’: Operation not permitted", even though I have added the mkdir function.
Any insights as to why this is happening, and what else I should add to my code to make this work, would be great. Also, this is my first Stack Overflow question, so I'm sorry if I didn't ask in a clear way.
The correct method for creating a directory looks like this (note the name Mkdir rather than MkDir, and the pointer receiver):
func (d *Dir) Mkdir(ctx context.Context, req *fuse.MkdirRequest) (fs.Node, error) {
}
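A rough sketch of a full handler, reusing the Dir fields from the question (name, mode, nextdir); inode allocation and locking are left out, so treat this as an assumption-laden starting point rather than the one canonical Bazil implementation:
func (d *Dir) Mkdir(ctx context.Context, req *fuse.MkdirRequest) (fs.Node, error) {
    child := &Dir{
        name: req.Name,
        mode: os.ModeDir | req.Mode.Perm(), // honour the mode requested by the kernel
    }
    d.nextdir = child // pointer receiver, so the parent actually keeps the new node
    return child, nil
}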
When I do some file operations with Go, I first open a file and add the Close() call to the defer list, then I try to rename that file. If I close the file manually, the defer will close it again. If I wait for the defer to close it, the rename will cause an error because the file is not closed yet. Code as below:
func main() {
    pfile1, _ := os.Open("myfile.log")
    defer pfile1.Close() // it will be closed again
    ...
    ...
    pfile1.Close() // I have to close it before renaming it
    os.Rename("myfile.log", "myfile1.log")
}
I found an ugly solution, such as creating another function to separate the file opening. Is there any better solution than the one below?
func main() {
    var pfile1 *os.File
    ugly_solution(pfile1)
    os.Rename("myfile.log", "myfile1.log")
}

func ugly_solution(file *os.File) {
    file, _ = os.Open("myfile.log")
    defer file.Close()
}
There are a few things that are not clear to me about your code.
First of all, why do you open the file before renaming it? This is not required by the os.Rename function. The function takes two strings representing the old and new file names; there is no need to pass a file pointer.
func main() {
    ...
    ...
    os.Rename("myfile.log", "myfile1.log")
}
Assuming you need to make changes to the file content (which doesn't seem to be the case given the ugly_solution method) and you have to open the file, then why defer file.Close() at all? You don't have to defer the call if you need it to happen explicitly at a specific point in the same function. Simply call it.
func main() {
    pfile1, _ := os.Open("myfile.log")
    ...
    ...
    pfile1.Close()
    os.Rename("myfile.log", "myfile1.log")
}
You can put both closing and renaming the file in the defer:
func main() {
    pfile1, _ := os.Open("myfile.log")
    defer func() {
        pfile1.Close()
        os.Rename("myfile.log", "myfile1.log")
    }()
    ...
    ...
}
In a situation like the one in your sample, maybe you want to follow this scenario:
Create an easily identifiable temporary file.
Write the data.
Close the file.
If successful, rename the file.
In that case, where you care about the OS-level handling of the underlying file, you may want to avoid deferring the Close on the os.File, since you want to check the error returned by Close itself.
Also, in that case you may want to call File.Sync() before closing.
See https://www.joeshaw.org/dont-defer-close-on-writable-files/
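A minimal sketch of that scenario (the file names and the writeAndRename helper are placeholders, not taken from the question):
func writeAndRename(data []byte) error {
    tmp, err := os.Create("myfile.log.tmp")
    if err != nil {
        return err
    }
    if _, err := tmp.Write(data); err != nil {
        tmp.Close()
        return err
    }
    if err := tmp.Sync(); err != nil { // flush to stable storage before closing
        tmp.Close()
        return err
    }
    if err := tmp.Close(); err != nil { // check Close explicitly instead of deferring it
        return err
    }
    return os.Rename("myfile.log.tmp", "myfile.log")
}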
This might be a very amateur question. I'm trying to embed static files, i.e. HTML, into the binary. How do I do that with https://github.com/jteeuwen/go-bindata?
I can access an asset as shown in https://github.com/jteeuwen/go-bindata#accessing-an-asset, but what do I do with "data"? How do I parse the files, execute the template, and serve them?
I couldn't find any examples online, and would appreciate some help!
5/6 years later, this should be easier with Go 1.16 (Q1 2021), which adds support for embedded files (issue/proposal 41191).
It will be permitted to use //go:embed naming a single file to initialize a plain string or []byte variable:
//go:embed gopher.png
var gopherPNG []byte
Importing the embed package (import _ "embed") is required to flag the file as containing //go:embed lines and needing processing.
Goimports (and gopls etc) can be taught this rule and automatically add the import in any file with a //go:embed as needed.
That sparked a debate on issue 42328 about how to avoid the surprising inclusion of "hidden" files when using //go:embed.
This was resolved in CL 275092 and commit 37588ff:
the decision was to exclude files matching .* and _* from embedded directory results when embedding an entire directory tree.
See src/embed/internal/embedtest/embed_test.go
//go:embed testdata/k*.txt
var local embed.FS
testFiles(t, local, "testdata/ken.txt", "If a program is too slow, it must have a loop.\n")
//go:embed testdata/k*.txt
var s string
testString(t, s, "local variable s", "If a program is too slow, it must have a loop.\n")
//go:embed testdata/h*.txt
var b []byte
testString(t, string(b), "local variable b", "hello, world\n")
Note: with CL 281492, cmd/go passes embedcfg to gccgo if supported.
See also (Jan. 2021) issue 43854 "opt-in for //go:embed to not ignore files and empty dirs".
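To connect this back to the question (parse an embedded HTML file, execute the template, serve it), a minimal sketch with the Go 1.16 embed package might look like this; it reuses the data/hi.html layout from the go-bindata example below:
package main

import (
    "embed"
    "html/template"
    "log"
    "net/http"
)

//go:embed data/hi.html
var content embed.FS

var tmpl = template.Must(template.ParseFS(content, "data/hi.html"))

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
        tmpl.Execute(w, map[string]string{"Name": "James"})
    })
    log.Fatal(http.ListenAndServe(":8000", nil))
}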
Given a directory structure like so:
example/
    main.go
    data/hi.html
example/main.go
package main

import (
    "html/template"
    "log"
    "net/http"
    "os"
)

var tmpl *template.Template

func init() {
    data, err := Asset("data/hi.html")
    if err != nil {
        log.Fatal(err)
    }
    tmpl = template.Must(template.New("tmpl").Parse(string(data)))
}

func main() {
    // print to stdout
    tmpl.Execute(os.Stdout, map[string]string{"Name": "James"})

    http.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
        tmpl.Execute(w, map[string]string{"Name": "James"})
    })

    log.Fatal(http.ListenAndServe(":8000", nil))
}
example/data/hi.html
<h1>Hi, {{.Name}}</h1>
run like so:
go-bindata data && go build && ./example
Console Output:
<h1>Hi, James</h1>
HTTP output:
Hi, James
I want to implement a webdav-server with Go and found a new "x" package here:
But I don't know how to use this package to get it done.
Can someone help me with this issue?
I tried this:
func main() {
    fs := new(webdav.FileSystem)
    ls := new(webdav.LockSystem)
    h := new(webdav.Handler)
    h.FileSystem = *fs
    h.LockSystem = *ls
    // then use the Handler.ServeHTTP method as the http.HandleFunc
    http.HandleFunc("/", h.ServeHTTP)
    http.ListenAndServe(":5555", nil)
}
If I try to connect to the server, I get an internal server error.
What am I doing wrong?
Thanks for your help.
The x/net/webdav package is still in an early phase of development. Many critical parts are still being implemented, and it cannot be used as such at this moment. Taking a look at the source code, over half of the necessary structures and functions are still completely missing.
Unfortunately there are no Go based WebDAV server implementations at this moment. (In case someone can correct me, please feel free to do so!)
func main() {
    fs := new(webdav.FileSystem)
    ls := new(webdav.LockSystem)
    h := new(webdav.Handler)
    h.FileSystem = fs
    h.LockSystem = ls
    // then use the Handler.ServeHTTP method as the http.HandleFunc
    http.HandleFunc("/", h.ServeHTTP)
    http.ListenAndServe(":5555", nil)
}
Try removing the * before fs and ls, because they are already pointers.
NB: if you have to assign a pointer, use & and not *.
Create a WebDAV server on http://localhost:8080/ which serves the folder C:\myfiles.
package main

import (
    "net/http"

    "golang.org/x/net/webdav"
)

func main() {
    handler := &webdav.Handler{
        FileSystem: webdav.Dir(`C:\myfiles`),
        LockSystem: webdav.NewMemLS(),
    }
    http.ListenAndServe("localhost:8080", handler)
}
Mount it to drive letter E: in Windows:
net use e: http://localhost:8080/
Open mounted drive in explorer
explorer.exe e:
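To sanity-check the server from the command line without mounting it (this curl invocation is an assumption, not part of the answer above), you can request a WebDAV directory listing with a PROPFIND:
curl -X PROPFIND -H "Depth: 1" http://localhost:8080/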