So if I've created a temp file in the temp directory, used it, and now need to remove it (or them): should I first call file.Close() and then os.RemoveAll, or does calling os.RemoveAll make closing the files unnecessary? Is the file descriptor freed then?
On Linux, removing a file removes its name from the file system, but the blocks of storage remain on disk while you still have an open file descriptor, and are released only once that file descriptor (and any other file descriptors open on that file) is closed. See https://linux.die.net/man/2/unlink
In Go, the open file descriptor will not be closed just because you call os.RemoveAll() on a directory that contains the file.
I believe Microsoft Windows works differently: I think you will get an error when you try to remove a file that's currently open for writing. I could be wrong, I'm no expert on Windows. But again, the open file descriptor will not be closed automatically.
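So on either platform the safe order is: close first, then remove. A minimal Go sketch of that cleanup sequence (directory and file names are illustrative):

package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// Create a temp directory and a temp file inside it.
	dir, err := os.MkdirTemp("", "example")
	if err != nil {
		log.Fatal(err)
	}
	f, err := os.CreateTemp(dir, "data-*.tmp")
	if err != nil {
		log.Fatal(err)
	}
	if _, err := f.WriteString("scratch data\n"); err != nil {
		log.Fatal(err)
	}

	// Close first: os.RemoveAll only unlinks names. It does not close
	// descriptors, so an open descriptor would keep the storage
	// allocated on Linux, and on Windows may make the removal fail.
	if err := f.Close(); err != nil {
		log.Fatal(err)
	}
	if err := os.RemoveAll(dir); err != nil {
		log.Fatal(err)
	}
	fmt.Println("cleaned up", dir)
}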
I'm trying to set up a set of folders on a Windows Server 2008 machine (yes, I know, old) where a user doesn't have access to see the list of files within the folder, but can read a file if they know the full file path.
So I've set up the following AD permissions:
Permissions on the containing folder ("This folder only"):
Traverse folder / execute file
Read attributes
Read extended attributes
Read permissions
Permissions on the files ("Files only"):
Traverse folder / execute file
List folder / read data
Read attributes
Read extended attributes
Read permissions
... and from Windows, everything looks great! I can't see inside the folder, but if I know the full path to a file within, I can type it into an address bar and open the file.
But when I run in Command Prompt:
COPY "FullPathToSameFileAsBefore.txt" "C:\someLocalSpot.txt"
... I get:
Access is denied.
0 file(s) copied.
Any ideas? Is there some special access that Command Prompt needs to perform the copy, which Windows doesn't need in order to read the file? Any alternatives that would work instead? I can set any permissions that are needed, with the caveat that the user cannot see the list of files within the directory.
EDIT with additional info:
So I tried to perform a copy with VBScript using a FileSystemObject. Same error. But using VBScript to read the file with an ADODB binary stream does work.
So it seems to boil down to "You can read this file, but you can't perform a copy," which seems weird: if you can read the file, you can certainly copy it (read it, then write the contents someplace else).
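For what it's worth, if the built-in COPY keeps failing, a small program can do exactly that: open the source read-only (which is all the ADODB stream needed) and write the bytes elsewhere. A minimal Go sketch, with placeholder paths:

package main

import (
	"io"
	"log"
	"os"
)

// copyByReading streams src to dst using only read access on the
// source, mirroring what the ADODB binary stream does.
func copyByReading(src, dst string) error {
	in, err := os.Open(src) // read-only open
	if err != nil {
		return err
	}
	defer in.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Placeholder paths standing in for the restricted file and
	// the local destination.
	if err := copyByReading(`\\server\share\file.txt`, `C:\someLocalSpot.txt`); err != nil {
		log.Fatal(err)
	}
}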
I have a batch file that copies the most recent version of an access front-end file to the user's C: drive and then opens it from there.
For some users, the copy command causes the batch file to close, and I can't work out what could cause that. The file seems to copy, but the batch file just closes itself without any visible error messages.
I've used Pause to confirm that the failure happens at the Copy step, not the Run or the If.
This is Windows 7, I've tried it with Copy and Xcopy. The users with the issue say it's worked in the past, they all have access to the location being copied from (and to). Mapping the location doesn't seem to make any difference, and UNC paths work for most users so it's not that.
Deleting the existing files in C:\databases doesn't help.
if not exist "C:\Databases\" mkdir "C:\Databases"
copy "\\SERVER02\FINOPS\COMMAQR\DIGIHUB\1. Live Version\DIGIHUB v2.5.accdb" "C:\Databases\"
start [the file]
For 95%+ of users, the batch file copies the most recent version down and opens the file. For a handful, the batch file reaches the copy step and closes itself.
Does anyone know why this could happen, or alternatives to both Copy and XCopy that might not fail?
If I open a file in Windows that was downloaded by Chrome or another browser, Windows pops up a warning that this file was downloaded from the internet. The same goes for documents you open in Microsoft Word.
But how does Windows know that this file originated from the internet? As far as I can tell, it's the same as every other file on my hard drive. Does it have something to do with the file properties?
Harry Johnston got it!
It had nothing to do with the temporary folder or a media cache. It's an NTFS alternate data stream.
For further reading: MSDN File Streams
This blocking information is added with the following commands on the CLI (the More? that cmd prints on the second line is just its continuation prompt, not part of the command):
(echo [ZoneTransfer]
echo ZoneId=3) > test.docx:Zone.Identifier
This creates an alternate data stream.
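On NTFS a named stream is addressed as filename:streamname, and Go's file functions should work with such names too, since they pass the name through to the Windows CreateFile API. A sketch (Windows/NTFS only; test.docx as in the cmd example above):

package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// Write the same [ZoneTransfer] marker the cmd example creates.
	// The ":Zone.Identifier" suffix names an NTFS alternate data
	// stream, so this only works on an NTFS volume on Windows.
	marker := "[ZoneTransfer]\r\nZoneId=3\r\n"
	if err := os.WriteFile(`test.docx:Zone.Identifier`, []byte(marker), 0644); err != nil {
		log.Fatal(err)
	}

	// Read it back: this is the metadata the "downloaded from the
	// internet" warning is keyed on.
	data, err := os.ReadFile(`test.docx:Zone.Identifier`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(data))
}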
When you download a file from the internet, it is first downloaded to a media cache instead of the temp folder. Only after that is it moved to the location where you chose to save it.
If you copy and paste a file, it moves through the temp folder only. Before opening any file, Windows checks its location, and if it is the media folder you get "file is downloading" or other related errors.
I have a background process that I do not want to restart. Its output is actively being logged to a file.
nohup mycommand 1> myoutputfile.log 2>&1 &
I want to "archive" the file the process is currently writing its output to, and make it start writing to a blank file at the same file name. I must be able to do this without having to kill the process and start it again.
I tried simply renaming the existing file (to myoutputfile_.log), hoping that the shell, finding the file no longer there, would create a new file with the original file name (myoutputfile.log). But this does not work: the process holds an open descriptor to the file itself, not to its name, so it keeps appending to the renamed file.
I looked here. On executing ls, I see that the streams are now marked as (deleted) but I'm quite confused what to do next. In the gdb command, do I have to specify the process executable in addition to the process ID? What happens if I don't specify it or I get it wrong? Once in gdb, how do I force the stream to re-create a file in the deleted file's same location (same path and filename)?
How can I use the commands in shell to signal it to start a new file for an existing process's output redirection?
PS: I can't do a trial-and-error because it's rather important I get this right. If it is relevant to know, this is a java process.
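The rename observation is easy to reproduce: an open descriptor tracks the file itself (the inode), not its name, so the writer simply follows the file to its new name. A minimal Go sketch of the effect, using the question's filenames (run it in a scratch directory):

package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	f, err := os.Create("myoutputfile.log")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if _, err := f.WriteString("line 1\n"); err != nil {
		log.Fatal(err)
	}

	// Rename the file while the descriptor is still open,
	// exactly like the attempted archive step.
	if err := os.Rename("myoutputfile.log", "myoutputfile_.log"); err != nil {
		log.Fatal(err)
	}

	// This write lands in myoutputfile_.log: the descriptor follows
	// the inode, and no new myoutputfile.log appears on its own.
	if _, err := f.WriteString("line 2\n"); err != nil {
		log.Fatal(err)
	}

	data, err := os.ReadFile("myoutputfile_.log")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(data)) // prints both lines
}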
I resolved this issue by doing the following:
cp myoutputfile.log myoutputfile_.log; echo > myoutputfile.log
This essentially reset the log file after copying the original contents to a new file.
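A scripted version of the same idea, sketched here in Go with the question's filenames: the key point is that truncating in place reuses the existing inode, so the writing process's descriptor stays attached.

package main

import (
	"io"
	"log"
	"os"
)

// rotate copies the live log aside, then truncates it in place.
// Truncating, rather than renaming or deleting, keeps the same
// inode, so the background process's descriptor stays valid.
func rotate(live, archive string) error {
	src, err := os.Open(live)
	if err != nil {
		return err
	}
	defer src.Close()

	dst, err := os.Create(archive)
	if err != nil {
		return err
	}
	defer dst.Close()

	if _, err := io.Copy(dst, src); err != nil {
		return err
	}
	return os.Truncate(live, 0)
}

func main() {
	if err := rotate("myoutputfile.log", "myoutputfile_.log"); err != nil {
		log.Fatal(err)
	}
}

One caveat: anything written between the copy and the truncation is lost, and if the writer did not open the log in append mode it keeps writing at its old offset, leaving a run of null bytes at the front of the truncated file.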
I did the following:
nohup find / &
rm nohup.out
Oddly, the nohup command continued to run. I waited for a new file to be created. To my surprise, there was no such file. Where did the stdout of the command go?
Removing a file in UNIX does two things:
it removes the directory entry for it.
if no processes have it open and no other directory entries point to it (hard links), it releases the space.
Your nohupped process will gladly continue to write to the file that used to be called nohup.out, but is now known as nothing but a file descriptor within that process.
You can even have another process create a nohup.out, it won't interfere with the first.
When all hard links are gone, and all processes have closed it, the disk space will be recovered.
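Both points are easy to demonstrate from a short Go program (Unix semantics; run it in a scratch directory):

package main

import (
	"log"
	"os"
)

func main() {
	f, err := os.Create("nohup.out")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Remove the only directory entry; the inode survives because
	// this process still holds an open descriptor.
	if err := os.Remove("nohup.out"); err != nil {
		log.Fatal(err)
	}

	// Writes still succeed; they go to the now-nameless file.
	if _, err := f.WriteString("still being written\n"); err != nil {
		log.Fatal(err)
	}

	// A brand-new nohup.out can coexist with the old one and
	// does not interfere with it.
	g, err := os.Create("nohup.out")
	if err != nil {
		log.Fatal(err)
	}
	g.Close()
	// When f is closed on exit, the nameless file's space is freed.
}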
If you delete the nohup.out file, the name is lost and the process will keep writing to its now-anonymous file descriptor. But if you just want to clean out the nohup.out file, run this:
true > nohup.out
This deletes the contents of the file but not the file itself.
That's standard behaviour on Unix.
When you removed the file, the last link in the file system was removed, but the file was still open and therefore the output of find (in this case) was written to disk blocks in the kernel buffer pool, and possibly even to disk. But the file had no name. When find exited, no process or file (inode) referenced the file, so the space was released. This is one way that temporary files that will vanish when a program exits are created - by opening the file and then removing it. (This presumes you do not need a name for the file; clearly, if you need a name for the temporary, this technique won't work.)
cat /dev/null > nohup.out
from here