A system I have come across that uses Active Directory and has disk quotas does not make the quotas transparent to the user. All of the user-facing displays in Windows (My Computer, etc.) and calls to GetDiskFreeSpaceEx always return the free space of the volume, and yet the user can never fill this free space because of the quotas. I have not been able to figure out any way to determine the size of the quota, and on the users' PCs we have not found anything that returns these values.
It seems the quotas are somehow applied at the directory level, and users are then limited to writing to certain directories. So a user's quota always shows up as the free space of the disk, even though they cannot write anywhere near this amount to any of the directories they have access to.
Has anyone come across something like this and know of a WinAPI/MSDN article about it? I am trying to write my program to figure out how much free space a mapped Active Directory drive has for the user.
If you need to do anything related to administration on Windows, the place to start looking is usually WMI. There is a class called Win32_DiskQuota that has a Limit property.
This TechNet blog post has some sample code for calling this method from VBScript, which wouldn't be that hard to translate to C# or VB.NET (look at System.Management); or, if you prefer C++, here are some samples showing how to use WMI from C++.
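If you go the C#/System.Management route, a minimal sketch of reading the limit and usage per user might look like this. Win32_DiskQuota and its properties are the documented WMI names; everything else here is illustrative:

    using System;
    using System.Management; // add a reference to System.Management

    class QuotaQuery
    {
        static void Main()
        {
            // Enumerate all quota entries on the local machine. To target
            // a remote server, put its name in the scope path instead.
            var searcher = new ManagementObjectSearcher(
                @"\\.\root\cimv2",
                "SELECT * FROM Win32_DiskQuota");

            foreach (ManagementObject quota in searcher.Get())
            {
                Console.WriteLine("Volume: {0}", quota["QuotaVolume"]);
                Console.WriteLine("User:   {0}", quota["User"]);
                Console.WriteLine("Limit:  {0} bytes", quota["Limit"]);
                Console.WriteLine("Used:   {0} bytes", quota["DiskSpaceUsed"]);
            }
        }
    }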
I have been searching everywhere for all the combinations of things that I want to accomplish, hoping something would pop up, but I can't find anything. Additionally, I am not sure whether I am "crafting" my query properly, so I am hoping I can get some assistance on that here.
What I would like to accomplish is this (pseudo-logic):
Create a single container file, for example vdata.x, which will contain everything in it as a single data file.
Mount this file as an actual drive/folder in Windows so that you can write to, read from, and delete/modify the contents as if you were using Windows Explorer; visible to the file system, applications, and the command line like any other "real" folder on the machine.
Preferably, the ability to have this file reside on a thumb drive and have it mounted either automatically or manually after it is plugged in, showing up not as the thumb drive but as the file inside it (or mount both; it doesn't matter).
Additionally, the ability for that file to be locked, encrypted, and accessible (despite auto-mounting, if that's the case) only after it has been authenticated with a password, random token, or whatnot.
Finally, a housekeeping element, such as being aware of its available "host" space (i.e., the thumb drive) so that as it reaches a certain threshold it can say "move me to a larger device, make room, or stop adding more"; something akin to a running-out-of-space warning (see the sketch after this list).
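The housekeeping item, at least, is straightforward today. A minimal C# sketch of checking the host device's remaining space, where the drive letter and threshold are illustrative assumptions:

    using System;
    using System.IO;

    class HostSpaceCheck
    {
        static void Main()
        {
            var host = new DriveInfo("E");     // assumed: thumb drive hosting vdata.x
            const double warnThreshold = 0.10; // assumed: warn when <10% remains

            if (host.IsReady)
            {
                double freeFraction = (double)host.AvailableFreeSpace / host.TotalSize;
                if (freeFraction < warnThreshold)
                    Console.WriteLine(
                        "Host device nearly full; move the container or free up space.");
            }
        }
    }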
I thought about putting this in the Software Recommendations SE, but that was not fully up and running yet (at last check), and the range of people who visit that sub-SE might be very limited, so I am asking here to get feedback and discussion, to see whether we can answer it better here or it needs to move there.
Thank you in advance; I hope some brilliant minds out there can help me accomplish this.
PS. I am not averse to building something like this myself, but I am limited in time and health, and if it's already done, why reinvent the wheel, right? But if anything could help launch the development of such a tool, I would take that input as well. Thank you.
Preamble:
Recently I came across an interesting story about people who seem to be sending emails with documents that contain child pornography. Here is an example (this one is a JPEG, but I'm hearing about it being done with PDFs, which generally can't be previewed):
https://www.youtube.com/watch?v=zislzpkpvZc
This can pose a real threat to people in investigative journalism, because even if you delete the file after it has been opened, the copy in Temp may still be recovered by forensics software. Even just having opened the file already puts you in the realm of committing a felony.
This can also pose a real problem for security consultants. Say person A emails criminal files, and person B, suspicious of the email, forwards it to the security manager for their program. In order to analyze the file, the consultant may have to download it to a hard drive, even if they load it in a VM or sandbox. Even if they figure out what it is, they are still in a legal minefield where bad timing could land them in jail for 20 years. Thinking about this: if the data were only ever to enter RAM, then upon a power-down all traces of the opened file would disappear.
Question: I have an OK understanding of how computer architecture works, but the problem presented above made me start wondering. Is there a limitation at the OS, hardware, or firmware level that prevents a program from directing a stream of downloading information straight to RAM? If not, let's say you try to open a PDF: is it possible for the file to instead be passed to the program as a stream of downloaded bytes, so that no copy of the final file is ever retained on the HDD?
Unfortunately, I can only give a Linux/Unix-based answer to this, but hopefully it is helpful and extends to Windows too.
There are many ways to pass data between programs without writing to the hard disk; it is usually more a question of whether the software applications support it (the web browser and PDF reader, in your example). Streams can be passed via pipes and sockets, but the problem is that it may be more convenient for the receiving program to seek backward in the stream at certain points rather than store all the data in memory; this may be a more efficient use of resources, too. Hence many programs do not do this. A pipe can indeed be made to look like a file, but if the application tries to seek backward, it will cause an error.
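To make the seek problem concrete on the .NET side: a consumer can buffer the incoming stream entirely in memory, which restores random access without ever touching disk. A minimal C# sketch, with a placeholder URL:

    using System;
    using System.IO;
    using System.Net.Http;
    using System.Threading.Tasks;

    class InMemoryDownload
    {
        static async Task Main()
        {
            using var client = new HttpClient();
            using var response = await client.GetAsync(
                "https://example.com/file.pdf", // placeholder URL
                HttpCompletionOption.ResponseHeadersRead);

            // Buffer the body entirely in RAM; this code itself never
            // writes to the file system.
            using var buffer = new MemoryStream();
            await response.Content.CopyToAsync(buffer);

            // Unlike a pipe, the MemoryStream is seekable, so a consumer
            // that needs random access (e.g., a PDF parser) can use it.
            buffer.Position = 0;
            Console.WriteLine("Downloaded {0} bytes into RAM", buffer.Length);
        }
    }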
If there were more demand for streaming data to applications, it would probably be seen in more cases, as there are no major barriers. Currently it is more common to store PDFs in a temporary file if they are viewed in a plugin and not downloaded. Video can be different, though.
An alternative is to use a RAM drive; it is common for a Linux system to have at least one set up by default (tmpfs), although on Windows it seems you have to install additional software. Using one of these removes the above limitations, and it is fairly easy to point a web browser at it for temporary files.
I'm using a command-line tool to do some processing on a file. The thing is that this file should not be stored on disk (for security reasons). So I was wondering whether it's possible in Windows to use a part of memory as a virtual file that is accessible by the command-line tool as if it were a real physical file.
Yes, it's possible with what are usually referred to as "RAM disks". What's the best ramdisk for Windows? over at superuser.com has some links.
Have you written the command-line tool yourself? If so, you can simply allocate a section of memory in your program and use it in your processing; there's little reason to trick the app into thinking it's using a file on a physical disk. The specifics depend on what language your app is written in.
If not, you'll need to create a RAM disk and tell the program to use that. Using a RAM disk on Windows requires third-party software; a comprehensive list of options is available here on Super User.
Note, though, that neither using a RAM disk nor storing all of your data in memory will make it more secure. The information stored in RAM is just as accessible to prying eyes and malicious applications as data saved on the hard disk; probably more so than data that has been deleted from the hard disk.
If you need a ready-to-use application, there are several RAM disk applications (including free ones) on the market, and then your question here is off-topic. If you need to do this in code, then one of our virtual storage products (SolFS, CallbackDisk, Callback File System) will work; Callback File System has a sample project that stores files in memory.
If you're using .NET, you might look into MemoryStream.
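If you control the producing side, a minimal C# sketch of staging data in a named memory-mapped file, which another process that knows the name can open with MemoryMappedFile.OpenExisting; the map name and size here are illustrative:

    using System;
    using System.IO.MemoryMappedFiles;
    using System.Text;

    class SharedMemoryFile
    {
        static void Main()
        {
            // Create a 1 MB named shared-memory region; nothing touches disk.
            using var mmf = MemoryMappedFile.CreateNew("MyBuffer", 1024 * 1024);
            using var stream = mmf.CreateViewStream();

            byte[] payload = Encoding.UTF8.GetBytes("sensitive data, never written to disk");
            stream.Write(payload, 0, payload.Length);

            // Keep the mapping alive while the other process reads it.
            Console.WriteLine("Data staged; press Enter to release.");
            Console.ReadLine();
        }
    }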
Note Cody Gray's answer, though, which is only too true insofar as having something in memory does not guarantee that it can't be compromised. Opinions differ on this subject; most people would argue that writing to disk is even less secure, especially in the age of wear-levelling, where controlling what is deleted and what is not is practically impossible.
RAM has its own disadvantages, but on the positive side, what's gone is gone :-)
Whiteboard Overview
The images below are 1000 x 750 px, ~130 kB JPEGs hosted on ImageShack.
[Image: Internal whiteboard diagram]
[Image: Global whiteboard diagram]
Additional Information
I should mention that each user (of the client boxes) will be working straight off the /Foo share. Due to the nature of the business, users will never need to see or work on each other's documents concurrently, so conflicts of this nature will never be a problem. Access needs to be as simple as possible for them, which probably means mapping a drive to their respective /Foo/username sub-directory.
Additionally, no one but my applications (in-house and the ones on the server) will be using the FTP directory directly.
Possible Implementations
Unfortunately, it doesn't look like I can use off-the-shelf tools such as WinSCP, because some other logic needs to be intimately tied into the process.
I figure there are two simple ways for me to accomplish the above on the in-house side.
Method one (slow):
Walk the /Foo directory tree every N minutes.
Diff with the previous tree using a combination of timestamps (these can be faked by file-copying tools, but that's not relevant in this case) and checksums (see the fingerprint sketch after this list).
Merge changes with off-site FTP server.
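A minimal C# sketch of the per-file fingerprint that method one would diff; the helper name is hypothetical, and BitConverter is used so it runs on older runtimes:

    using System;
    using System.IO;
    using System.Security.Cryptography;

    static class TreeScan
    {
        // Fingerprint one file by last-write time plus SHA-256; these are
        // the inputs to the diff against the previous scan's results.
        public static (DateTime Stamp, string Hash) Fingerprint(string path)
        {
            using var sha = SHA256.Create();
            using var fs = File.OpenRead(path);
            byte[] hash = sha.ComputeHash(fs);
            return (File.GetLastWriteTimeUtc(path),
                    BitConverter.ToString(hash).Replace("-", ""));
        }
    }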
Method two:
Register for directory change notifications (e.g., using ReadDirectoryChangesW from the WinAPI, or FileSystemWatcher if using .NET; see the sketch after this list).
Log changes.
Merge changes with off-site FTP server every N minutes.
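For the .NET variant of method two, a minimal FileSystemWatcher sketch; the share path is illustrative:

    using System;
    using System.IO;

    class ShareWatcher
    {
        static void Main()
        {
            // Watch the share root recursively for writes, creations,
            // deletions, and renames.
            using var watcher = new FileSystemWatcher(@"D:\Foo")
            {
                IncludeSubdirectories = true,
                NotifyFilter = NotifyFilters.FileName
                             | NotifyFilters.LastWrite
                             | NotifyFilters.Size
            };

            watcher.Changed += (s, e) => Console.WriteLine("Changed: " + e.FullPath);
            watcher.Created += (s, e) => Console.WriteLine("Created: " + e.FullPath);
            watcher.Deleted += (s, e) => Console.WriteLine("Deleted: " + e.FullPath);
            watcher.Renamed += (s, e) => Console.WriteLine(
                "Renamed: " + e.OldFullPath + " -> " + e.FullPath);

            watcher.EnableRaisingEvents = true;
            Console.WriteLine("Watching. Press Enter to stop.");
            Console.ReadLine();
        }
    }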
I'll probably end up using something like the second method due to performance considerations.
Problem
Since this synchronization must take place during business hours, the first problem that arises is during the off-site upload stage.
While I'm transferring a file off-site, I effectively need to prevent the users from writing to it (e.g., open it with CreateFile and FILE_SHARE_READ, or something similar) while I'm reading from it. The internet upstream speeds at their office are nowhere near symmetrical to the file sizes they'll be working with, so it's quite possible that they'll come back to a file and attempt to modify it while I'm still reading from it.
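In .NET terms, that CreateFile sharing mode corresponds to opening a FileStream with FileShare.Read: other processes can still read the file while the handle is open, but attempts to open it for writing fail with a sharing violation. A minimal sketch:

    using System;
    using System.IO;

    class LockedRead
    {
        static void Main(string[] args)
        {
            // Open for reading; others may read but not write or delete
            // for as long as this handle stays open.
            using var stream = new FileStream(
                args[0], FileMode.Open, FileAccess.Read, FileShare.Read);

            // ... read and upload the bytes here; a user saving over the
            // file in the meantime gets a sharing violation instead.
            Console.WriteLine("Holding {0} ({1} bytes)", args[0], stream.Length);
        }
    }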
Possible Solution
The easiest solution to the above problem would be to create a copy of the file(s) in question elsewhere on the file-system and transfer those "snapshots" without disturbance.
The files (some will be binary) that these guys will be working with are relatively small, probably ≤20 MB, so copying (and therefore temporarily locking) them will be almost instant. The chances of them attempting to write to the file in the same instant that I'm copying it should be close to nil.
This solution seems kind of ugly, though, and I'm pretty sure there's a better way to handle this type of problem.
One thing that comes to mind is something like a file system filter that takes care of the replication and synchronization at the IRP level, kind of like what some A/Vs do. This is overkill for my project, however.
Questions
This is the first time I've had to deal with this type of problem, so perhaps I'm overthinking it.
I'm interested in clean solutions that don't require going overboard with the complexity of their implementations. Perhaps I've missed something in the WinAPI that handles this problem gracefully?
I haven't decided what I'll be writing this in, but I'm comfortable with: C, C++, C#, D, and Perl.
After the discussions in the comments, my proposal would be as follows:
Create a partition on your data server, about 5 GB for safety.
Create a Windows Service project in C# that monitors your data drive/location.
When a file has been modified, create a local copy of it under the same directory structure on the new partition (see the sketch after these steps).
Create another service that would do the following:
Monitor bandwidth usage.
Monitor file creations on the temporary partition.
Transfer several files at a time (use threading) to your FTP server, abiding by the bandwidth usage at the time and decreasing/increasing the worker threads depending on network traffic.
Remove files from the partition once they have transferred successfully.
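The copy-with-structure step of the first service might look like this minimal C# sketch; the drive roots are illustrative, and Path.GetRelativePath requires .NET Core 2.0 or later:

    using System.IO;

    static class Mirror
    {
        // Copy a changed file from the share to the staging partition,
        // recreating the same directory structure underneath it.
        public static void Stage(string changedFile)
        {
            const string shareRoot = @"D:\";
            const string stageRoot = @"X:\";

            string relative = Path.GetRelativePath(shareRoot, changedFile);
            string target = Path.Combine(stageRoot, relative);

            Directory.CreateDirectory(Path.GetDirectoryName(target));
            File.Copy(changedFile, target, overwrite: true);
        }
    }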
So basically you have your drives:
C: Windows Installation
D: Share Storage
X: Temporary Partition
Then you would have the following services:
LocalMirrorService - Watches D: and copies to X: with the dir structure
TransferClientService - Moves files from X: to the FTP server and removes them from X:; uses multiple threads to move several files at once and monitors bandwidth.
I would bet that this is the idea you had in mind, but it seems like a reasonable approach, as long as you're really good with application development and are able to create a solid system that will handle most issues.
When a user edits a document in Microsoft Word, for instance, the file will change on the share and may be copied to X: even though the user is still working on it. Within Windows there are ways to see whether the file handle is still open; if it is, you can simply watch for when the user actually closes the document, so that all their edits are complete, and then migrate it to drive X: (see the sketch below).
That being said, if the user is working on the document and their PC crashes for some reason, the file handle may not get released until the document is opened again at a later date, thus causing issues.
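One common heuristic for the "is the handle still open?" check is simply to try opening the file exclusively and treat a sharing violation as "still in use". A minimal C# sketch with a hypothetical helper name:

    using System.IO;

    static class FileReady
    {
        // Returns true when no other process holds the file open.
        // A sharing violation means someone (e.g., Word) still has it.
        public static bool IsClosed(string path)
        {
            try
            {
                using var fs = new FileStream(
                    path, FileMode.Open, FileAccess.Read, FileShare.None);
                return true;
            }
            catch (IOException)
            {
                return false;
            }
        }
    }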
For anyone in a similar situation (I'm assuming the person who asked the question implemented a solution long ago), I would suggest an implementation of rsync.
rsync.net's Windows Backup Agent does what is described in method 1, and can be run as a service as well (see "Advanced Usage"). Though I'm not entirely sure if it has built-in bandwidth limiting...
Another (probably better) solution that does have bandwidth limiting is Duplicati. It also properly backs up currently open or locked files, and it uses SharpRSync, a managed rsync implementation, for its backend. It's open source too, which is always a plus!
I need to find the underlying disk capacity (total size) of an unmapped network share in Windows (Win7, Vista, XP, Server 2008), given a UNC path (e.g., something like "\\share_1\subdir").
I've looked all over the web for several days and seem to find no answer to this issue. I would appreciate any leads. Thanks in advance for your time!
I would have given up by now if it weren't for the ability to find the underlying free space of unmapped network shares using the GetDiskFreeSpaceEx() Win32 function. I imagine that disk capacity is stored in a similar fashion to free space, so retrieving it should be very similar (hence I'm somewhat infuriated with MS for not making the functionality obvious, or with myself for not being able to find it thus far!).
Regards,
vivri
You are on the right track. GetDiskFreeSpaceEx will also report capacity; you just have to read the correct out parameters.
See this Microsoft support link on how to do it.
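A minimal C# P/Invoke sketch, using the share name from the question; lpTotalNumberOfBytes is the capacity being asked about, and a UNC name must end with a trailing backslash:

    using System;
    using System.Runtime.InteropServices;

    class ShareCapacity
    {
        [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        [return: MarshalAs(UnmanagedType.Bool)]
        static extern bool GetDiskFreeSpaceEx(
            string lpDirectoryName,
            out ulong lpFreeBytesAvailable,      // free space for the caller (quota-aware)
            out ulong lpTotalNumberOfBytes,      // total size of the volume behind the share
            out ulong lpTotalNumberOfFreeBytes); // total free space, ignoring per-user quotas

        static void Main()
        {
            if (GetDiskFreeSpaceEx(@"\\share_1\subdir\",
                    out ulong available, out ulong total, out ulong totalFree))
            {
                Console.WriteLine("Capacity:      {0}", total);
                Console.WriteLine("Free (user):   {0}", available);
                Console.WriteLine("Free (volume): {0}", totalFree);
            }
            else
            {
                Console.WriteLine("Error: {0}", Marshal.GetLastWin32Error());
            }
        }
    }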
Keep in mind that GetDiskFreeSpaceEx may retrieve the free disk space per user. For instance, Windows Explorer also uses GetDiskFreeSpaceEx, and it may report not the actual free physical disk space but rather the logged-in user's quota.