I was contemplating moving to a version control system at work, but the learning curve may be too much for the many copywriters who open simple HTML files and edit them on our shared development server. The main issue is that sometimes two people will work on the same file (on our development server) at once and overwrite each other's changes.
Is there any extension to Windows Explorer that will simply display a lock icon next to a shared file that is already in use? For us, something like this may be simpler than teaching everyone to develop from their own working copies and use version control clients. I just want a visible warning to users that a file is already in use and should not be worked on. Thanks.
There might not even be enough information on the fileserver itself to determine this. For example, if you open an HTML file in Notepad, the file is loaded from disk and then the file is closed. Notepad keeps a copy in memory without keeping the file open on disk. This means that the fileserver doesn't even know that somebody is busy editing the file.
Some text editors might keep the file open but this is probably the exception rather than the rule.
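You can see that limitation concretely by probing for it yourself: the usual trick is to ask for exclusive access and treat a failure as "file in use". A minimal C# sketch (the class and method names are mine, for illustration); note that it only reports a file as in use while some editor actually holds a handle, so a Notepad-style editor will always look free:

using System;
using System.IO;

static class FileProbe
{
    // Returns true if some other process currently holds a handle to the file.
    public static bool LooksInUse(string path)
    {
        try
        {
            // FileShare.None requests exclusive access; the open fails
            // if any other handle to the file is already open.
            using (new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.None))
            {
                return false;
            }
        }
        catch (IOException)
        {
            return true; // somebody else has it open (or another I/O problem)
        }
    }
}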
A version control system (Subversion with TortoiseSVN is easy for people to use) allows users to declare their intent without relying on the underlying technology opening files in just the right way. TortoiseSVN displays a "lock" icon beside files that are locked (you lock a file with the "Get Lock" menu option). Files without the lock are marked read-only (via the svn:needs-lock property), so users know they aren't ready to be edited yet.
I want to put some sort of "hook" into Windows (it only has to work on Windows Server 2008 R2 and above) so that when I ask for a file on disk and it's not there, it is requested from a web server and cached locally.
The files are immutable and have unique file names.
The application that is trying to open these files is written in C and just opens a file using the operating system in the normal way. Say it calls OpenFile asking for c:\scripts\1234.12.script; if that file is there, it will just be opened normally. If it then asks for c:\scripts\1234.13.script and that file isn't there, my hook in the operating system will go and ask my web service for the file, download it, and then return that file as if it were there all the time.
I'd prefer to write this as a user-mode process (I've never written a Windows driver); it should only fire when files are not found in a specific folder, and I'd prefer, if possible, to write it in a managed language (C# would be perfect). The files are small (< 50 kB), the web service is fast, and the internet connection is blindingly fast, so I'm not expecting it to take more than a second to download a file.
My question is: where do I start looking for information about this kind of thing? And if anyone has done anything similar, do you know what options I have (e.g. can it be done in C#)?
You would need to create a kernel-mode file system filter driver which would intercept requests to open such files and would "fake" those files. I should say that this is a very complicated task, even by the standards of driver development. Our CallbackFilter product would be able to solve your problem; however, the mechanism for "faking" files is not ready yet (we plan this feature for CallbackFilter 3). Until then I don't know of any user-mode solutions (frankly speaking, no kernel-mode solutions either) that would solve your problem.
If you can change the folder the application is accessing, then you can create a virtual file system and map it to a drive letter or to a folder on an NTFS drive. From the virtual file system you can direct most requests to/from the real disk, and if a file doesn't exist, you can download it and cache it. Our other product, Callback File System, lets you do what I described in user mode. If you have a one-time task to accomplish and don't have a budget for it, please contact us anyway and maybe we can find some solution. There is also an open-source solution with similar (but not as comprehensive) functionality named Dokan, though I will refrain from commenting on its quality.
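Whichever layer ends up intercepting the open (filter driver, virtual file system, or the application itself if you can change it), the cache-fill step is the easy part. A sketch of just that step in C#; the service URL and all names here are placeholders, not anything from a real product:

using System.IO;
using System.Net;

class ScriptCache
{
    // Placeholder; substitute your real web service address.
    const string ServiceBase = "http://example.com/scripts/";

    // Returns a local path, downloading the file into the cache on a miss.
    // The files are immutable and uniquely named, so a download-once
    // cache never needs invalidation.
    public static string GetScript(string folder, string name)
    {
        string local = Path.Combine(folder, name);
        if (!File.Exists(local))
        {
            using (var client = new WebClient())
            {
                client.DownloadFile(ServiceBase + name, local);
            }
        }
        return local;
    }
}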
You can also try Dokan; it's open source, and you can check its discussion group for questions and guides.
I am trying to figure out when and how Windows updates file access times on files.
First of all, most Windows installs come with file access times disabled for performance reasons, so before wrapping your head around this, here is what you need to do to activate last access times on NTFS file systems: under the key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem, set the DWORD value NtfsDisableLastAccessUpdate to 0 (it is most likely set to 1). If the value doesn't exist, just create it.
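(If you'd rather flip that switch from code than with regedit, a sketch follows; writing under HKLM requires an elevated process. Running fsutil behavior set disablelastaccess 0 from an elevated command prompt achieves the same thing.)

using Microsoft.Win32;

class EnableLastAccessTimes
{
    static void Main()
    {
        // Requires administrator rights; 0 re-enables last-access updates.
        Registry.SetValue(
            @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem",
            "NtfsDisableLastAccessUpdate",
            0,
            RegistryValueKind.DWord);
    }
}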
After reading the File Times article on MSDN, I am still in doubt as to how Windows updates access times.
My questions are as follows:
Do access times update upon issuing a WinAPI CreateFile() with FILE_READ_ATTRIBUTES? In my case, doing it programmatically, they don't. Opening the file's Properties dialog through the Explorer shell does update the access time.
Do access times update upon issuing a WinAPI ExtractIconEx() to read an icon from a file? Again, doing so programmatically, they don't, while opening the file's Properties dialog through the Explorer shell does update the access time.
If you ask me, both of those cases should update the file access times, but it seems that direct WinAPI calls either don't update them or the Windows/NTFS driver lags way behind, while operating on files from Windows Explorer updates them promptly. What do you think is, or could be, the issue here?
As a side note, I did call CloseHandle(), as per:
The only guarantee about a file timestamp is that the file time is correctly reflected when the handle that makes the change is closed.
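For reference, this is roughly the kind of check I'm describing, reduced to the .NET wrappers rather than raw WinAPI calls (a sketch, not the exact code I used):

using System;
using System.IO;

class AccessTimeCheck
{
    static void Main(string[] args)
    {
        string path = args[0];
        DateTime before = File.GetLastAccessTime(path);

        // Actually read from the file, then let the handle close.
        using (FileStream fs = File.OpenRead(path))
        {
            fs.ReadByte();
        }

        DateTime after = File.GetLastAccessTime(path);

        // Note: NTFS is documented to defer on-disk last-access updates
        // by up to one hour, so "before" and "after" may well match.
        Console.WriteLine("before: {0:o}", before);
        Console.WriteLine("after:  {0:o}", after);
    }
}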
My conclusion is that the opinions lying around the web are indeed true: Windows updates file access times in a seemingly random fashion, and one really shouldn't depend on them in any way.
Off-topic rant: sorry, forensics guys, you'll have to prove access times using some other method, or you may have your case invalidated in seconds. :P
No, accessing the metadata of the file (name, attributes, timestamps) isn't going to change the last access time. That wouldn't work well in practice; just looking at the directory with Explorer would change it. You have to actually open the file. ExtractIconEx() would normally be an excellent candidate, except that Windows can play tricks with it: a hidden desktop.ini file can redirect the icon to another file.
Using the last access time is pretty worthless for forensics. You'd need a file system filter driver, similar to the one embedded in SysInternals' ProcMon utility. It might be using ETW, by the way; that got pretty powerful around Vista. Nevertheless, your project just got ten times more complicated.
I have a build script where I create a text report file and output various log-type stuff to it. The data is all being built onto an external HD which (according to mount) has file format "fuseblk" (which I've never heard of).
The building all seems to work OK, but my report files are being saved as executables, which Linux interprets as SOR files. I'd like them to just be regular text files, openable by default in my regular text editor.
I'm making the file, and writing to it, like this:
@report = File.open(File.join(DESTINATION_BUILD_FOLDER, "#{title.folder_name}_report.txt"), "w")
...
s = "making modules folder inside resource_library folder"; puts s; @report.puts s
...
@report.close
I've done this lots of times before and never encountered this problem. Any ideas, anyone?
cheers, max
PS: I know that I can edit the saved files to make them non-executable; my question is "why is this happening in the first place?". Cheers :)
I don't think there's anything wrong with your program. The fuseblk just means the drive is being mounted through FUSE, which allows file system drivers to run as user-space programs instead of kernel modules. Most likely, the file system is NTFS or FAT32.
The problem here is that Linux is assuming everything on the drive has the execute bit set. This is because neither NTFS nor FAT32 can store Linux permission bits (NTFS has a very different permissions system; FAT32 has virtually none). And I bet you're trying to double-click on the log files in something like the GNOME file explorer, right?
Well, go there with the command line and use less or your favorite command-line editor to view them. Or right-click on them in the file explorer, or open them with File -> Open from a text editor. Depending on the FUSE driver, you may also be able to mount the drive with an option like fmask=0133 so that files don't get the execute bit in the first place. If you ask your question to people who know GNOME (or KDE?) better, you'll probably get a better answer.
To me it's a no-brainer: the settings for my program go into the Windows Registry. After all, that's what it's for, isn't it?
But some programmers are still hesitant about using the Registry. They claim that as it grows it slows down your computer, or that it gets corrupted and causes your computer to malfunction.
So they write their own configuration files, or use the INI files that Microsoft has deprecated since a few OS versions ago.
From what I hear, the problems with the Registry that occurred in early Windows versions were mostly fixed as of Windows XP. It may be the plethora of companies that make registry cleaners that keep up the rumors that "registry bloat" and "orphaned entries" are still bad.
So I ask, is there any reason today not to use the Windows Registry to store my program configuration settings?
If the user does not allow registry access, you're screwed.
If the user reinstalls Windows and wants to migrate his settings, it's much more complicated than with a simple file.
Working with a config file means your app is portable.
It's much simpler for the user to change a setting manually.
When you want to port your app to another OS, what are you going to do with your registry settings?
Windows Registry is bloated. Do you really want to contribute to this chaos?
For me, quickly installing, migrating, and moving applications is key to productivity. I can't do that if I need to take care of hundreds of possible registry keys. If there's a simple .ini or .cfg or .xml file somewhere in my user folder (or even in the application directory, if it is a portable app), migration is easy.
An often-heard argument for the registry: it's easy to write and read (assuming you're using plain WinAPI). Really? I consider the RegXXX family of functions pretty verbose: too many function calls and too much typing just to store a few bits of information. So you always end up wrapping the registry away... and now compare that effort with a simple text configuration file, maybe just key=value-like.
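To make the comparison concrete, here is roughly what the two styles look like in C# (all names are made up for illustration; the registry flavour uses HKCU so no elevation is needed):

using System.IO;
using Microsoft.Win32;

static class Settings
{
    // Registry flavour.
    public static void SaveToRegistry(int width)
    {
        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(@"Software\MyApp"))
        {
            key.SetValue("WindowWidth", width, RegistryValueKind.DWord);
        }
    }

    public static int LoadFromRegistry()
    {
        object value = Registry.GetValue(
            @"HKEY_CURRENT_USER\Software\MyApp", "WindowWidth", 640);
        return value == null ? 640 : (int)value; // null if the key is missing
    }

    // File flavour: one key=value line in a plain text file.
    public static void SaveToFile(string path, int width)
    {
        File.WriteAllText(path, "WindowWidth=" + width);
    }
}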
It depends: when you have small entries that need to be read by multiple programs, the registry is OK, since databases have locking issues and config files are application-based.
The problem happens when the user does not allow registry access. There is lots of software on the market that will show a pop-up when anything tries to modify the registry, letting the user cancel or allow it. Such behavior is common in antivirus programs.
Putting your settings into the Registry means that if your users want to move your program and its settings to another computer, they can't. Backup, ditto. Those settings are in a mysterious, invisible place. I find this to be a hostile approach to one's users.
I've written numerous small-to-medium programs, and always used a .ini file. A tech-savvy user can edit this file using an editor, he can check the settings in it, he can email it to tech support, he can do a large variety of things that are significantly harder to do with registry entries.
And my programs don't contribute to slowing the computer down.
Personally speaking, I just don't like binary configuration of any type. I much prefer a text file format which can easily be copied, edited, diffed, and merged, and put under change control complete with history.
The last of these is the biggest reason not to use the registry: I can stick configuration files into SVN (or similar) with the full support given to text files, instead of having to treat the configuration as a blob.
I don't really have much of an opinion for or against using the registry, but I'd like to note something: many answers here indicate that registry access may be restricted for a certain user. I'd say the exact same thing goes for config files.
With the registry you need to write to the "current user" hive to be fairly certain of having access (and should do so anyway, in many cases). Config files should be put in a user-based area as well (e.g. AppData/Local) if you want "guaranteed" access without questions asked. As far as I know, putting config files in "global" areas is as likely to yield access problems as the registry is.
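For what it's worth, resolving that user-based area from code is nearly a one-liner in .NET; a sketch (the folder and file names are just examples):

using System;
using System.IO;

static class ConfigPath
{
    public static string Get()
    {
        // Resolves to something like C:\Users\<name>\AppData\Local.
        string baseDir = Environment.GetFolderPath(
            Environment.SpecialFolder.LocalApplicationData);
        string appDir = Path.Combine(baseDir, "MyApp");
        Directory.CreateDirectory(appDir); // no-op if it already exists
        return Path.Combine(appDir, "settings.ini");
    }
}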
UNIX file locking is dead easy: the operating system assumes that you know what you are doing and lets you do what you want:
For example, if you try to delete a file which another process has open, the operating system will usually let you do it. The original process still keeps its file handles until it terminates, at which point the file system will quietly recycle the disk resources. No fuss; that's the way I like it.
How different things are on Windows: if I try to delete a file which another process is using, I get an operating system error. The file is untouchable until the original process releases its lock on the file. That was great back in the single-user days of MS-DOS, when any locking process was likely to be on the same computer that contained the files; on a network, however, it's a nightmare:
Consider what happens when a process hangs while writing to a shared file on a Windows file server. Before the file can be deleted, we have to locate the computer and identify the process on that computer which originally opened the file. Only then can we kill the process and delete our unwanted file.
What a nuisance!
Is there a way to make this better? What I want is for file locking on Windows to behave like file locking in UNIX. I want the operating system to just let me do what I want, because I'm in charge and I know what I'm doing...
...so can it be done?
No. Windows is designed for the "average user", that is, people who don't understand anything about computers. Therefore, the OS tries to be smart to avoid PEBKACs. To quote Bill Gates: "There are no significant bugs in our released software that any significant number of users want fixed." Of course, he knows that 99.9999% of all Windows users can't tell whether the program just did something odd because of them or because of the guy who wrote it.
Unix was designed when the world was simpler and anyone close enough to a computer to touch it probably knew how to assemble it from dirty sand. Therefore, the OS usually lets you do what you want, because it assumes that you know better (and if you didn't, you will next time).
Technical answer: Unix allocates an i-node when you create a file, and the i-node can outlive its directory entry. If one process has a file open and another process deletes that path and creates a new file with the same name, you end up with two i-nodes: the old, now-unlinked one kept alive by the open handle, and the new one. This is by design, and it allows for a fancy security feature: you can create files which no one can open but yourself:
Open a file
Delete it (but keep the file handle)
Use the file any way you like
Close the file
After step #2, the only process in the universe that can access the file is the one which created it (unless you want to read the hard disk block by block). The OS will keep the data alive until you either close the file or your process dies (at which time Unix will clean up after you).
This design is the foundation of all Unix file systems. The Windows file system NTFS works much the same way, but the high-level API is different. Many applications open files in exclusive mode (which prevents anyone, even backup programs, from reading the file). This is even true for applications which just display information, like PDF viewers.
That means you'll have to fix all the Windows applications to achieve the desired effect. If you have access to the source, you can create the file in a shared mode. That would allow other processes to access it at the same time, but then you would have to check before every read/write whether the file still exists, whether someone has made changes, and so on.
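Incidentally, Windows does expose an analogue of the open-then-delete trick above. In .NET it is a single constructor flag, FileOptions.DeleteOnClose (FILE_FLAG_DELETE_ON_CLOSE underneath); a sketch, not something existing applications will do for you:

using System.IO;

static class ScratchFiles
{
    public static FileStream Create(string path)
    {
        // The file disappears as soon as the last handle on it is closed,
        // even if the process dies, much like unlinking an open file on Unix.
        return new FileStream(path, FileMode.Create, FileAccess.ReadWrite,
                              FileShare.None, 4096, FileOptions.DeleteOnClose);
    }
}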
According to MSDN, you can pass CreateFile() the share-mode flag FILE_SHARE_DELETE in its third parameter (dwShareMode), which:
Enables subsequent open operations on a file or device to request delete access.
Otherwise, other processes cannot open the file or device if they request delete access.
If this flag is not specified, but the file or device has been opened for delete access, the function fails.
Note Delete access allows both delete and rename operations.
http://msdn.microsoft.com/en-us/library/aa363858(VS.85).aspx
So if you can control your applications, you can use this flag.
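In managed code the same flag surfaces as FileShare.Delete, so if your applications open their files like the sketch below, other processes can delete (or rename) them while the handle is still open, which is essentially the Unix behaviour you're asking for:

using System.IO;

static class FriendlyOpen
{
    public static FileStream OpenShared(string path)
    {
        // FileShare.Delete maps to FILE_SHARE_DELETE: others may delete or
        // rename the file while we hold it. Read/Write sharing is included
        // so we don't block ordinary access either.
        return new FileStream(path, FileMode.Open, FileAccess.Read,
                              FileShare.ReadWrite | FileShare.Delete);
    }
}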
Note that Process Explorer allows force-closing file handles (for processes local to the box on which you are running it) via Handle -> Close Handle.
Unlocker purports to do a lot more, and provides a helpful list of other tools.
Also, deleting on reboot is an option (though this sounds like it's not what you want).
That doesn't really help if the hung process still has the handle open; the resources won't be released until that hung process releases the handle. But anyway, in Windows it is possible to force-close a file out from under a process that's using it. Process Explorer from sysinternals.com will let you look at, and close, the handles that a process has open.